dfaast: Commit 6a3c55a0
authored May 04, 2024 by Santhosh Kumar
cleanup code
parent db69f3ea
Showing 5 changed files with 46 additions and 3 deletions (+46 -3)
report/.graph_service.py.swp    +0  -0
report/graph_cpu.py             +0  -0
report/graph_service.py         +38 -0
report/service_time.png         +0  -0
src/example2/function.py        +8  -3
report/.graph_service.py.swp  0 → 100644
File added
report/graph.py → report/graph_cpu.py
File moved
report/graph_service.py  0 → 100644
import matplotlib.pyplot as plt

# Given values
cpu_utilization = [0, 10, 30, 40, 50, 60, 80, 90, 100]
service_time = [1.36, 1.26, 1.30, 1.32, 1.35, 1.40, 1.35, 1.37, 1.35]
num_of_pods = [1, 2, 5, 5, 5, 5, 5, 5, 5]

# Plotting
plt.figure(figsize=(10, 6))

# CPU Utilization on x-axis
plt.plot(cpu_utilization, service_time, label='Service Time(s)', marker='o')

# Number of Pods
plt.plot(cpu_utilization, num_of_pods, label='Number of Pods', marker='o')

plt.title('Service Time and Number of Pods vs. CPU Utilization')
plt.xlabel('CPU Utilization Limit(%)')
plt.ylabel('Service Time(s)')
plt.legend()
plt.grid(True)

plt.text(100, 5, 'minReplicas: 1\nmaxReplicas: 5\nMax CPU utilization: 40%',
         bbox=dict(facecolor='lightblue', alpha=0.5))

for cpu, avg_cpu, pods in zip(cpu_utilization, service_time, num_of_pods):
    plt.text(cpu, avg_cpu, f'{avg_cpu}', ha='right', va='bottom')
    plt.text(cpu, pods, f'{pods}', ha='right', va='bottom')

plt.tight_layout()
plt.show()
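The commit also adds report/service_time.png (listed below), while the committed script only calls plt.show(), so the PNG was presumably exported separately. A minimal sketch of that export step, run from the repository root; the savefig call and its arguments are an assumption, not part of the commit:

# Hypothetical export step (assumption, not in the committed script):
# write the figure to report/service_time.png instead of opening a window.
plt.savefig('report/service_time.png', dpi=100, bbox_inches='tight')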
report/service_time.png  0 → 100644
File added (50 KB)
src/example2/function.py
...
@@ -4,6 +4,7 @@ from prometheus_client import Counter, Gauge, start_http_server
 from prometheus_client import generate_latest, CONTENT_TYPE_LATEST, CollectorRegistry
 import psutil
 import time
+import subprocess
 import multiprocessing
...
@@ -122,10 +123,14 @@ def get_memory_usage():
     result = subprocess.run(['top', '-b', '-n', '1'], capture_output=True, text=True)
     return result.stdout
-@app.route('/memory')
-def manage_memory():
+@app.route('/memory/<int:memory_size>', methods=['GET'])
+def manage_memory(memory_size):
     global stress_process
-    size = request.args.get('size')
+    try:
+        size = int(memory_size) * 1024 * 1024
+    except ValueError:
+        return jsonify({'error': 'Invalid memory size'}), 400
     if size:
         if stress_process:
             # Kill existing stress-ng process
...
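The route change moves the requested size from a ?size= query parameter to a path parameter, and the handler now scales it by 1024 * 1024 before use. A minimal sketch of exercising the updated endpoint, assuming the Flask app is reachable on localhost:8080 (host and port are assumptions, not taken from the commit):

import requests

# Ask the service to allocate 256 units, which the handler scales by
# 1024 * 1024 (i.e. 256 MB) before acting on it.
resp = requests.get('http://localhost:8080/memory/256')
print(resp.status_code, resp.text)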