Introduction

Three types of system evaluation methods

  1. Measurement: experiment with the actual system
  2. Analytical solution: experiment with a mathematical model of the system
  3. Simulation: experiment with a fully executable model of the system

Evaluation of how different parameters affect the model and the real system.

Performance

Performance is how well a system performs its job.

An application or a server/machine is usually required to fulfil constraints on how it handles its inputs and outputs:

Inputs: number of jobs (workload)
    |
    v (latency)
Outputs
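
A minimal sketch of measuring both dimensions on a toy job (the job itself is hypothetical; any repeatable unit of work would do):

    import time

    def job():
        # Stand-in for a real unit of work (hypothetical).
        sum(i * i for i in range(100_000))

    n_jobs = 50
    start = time.perf_counter()
    for _ in range(n_jobs):
        job()
    elapsed = time.perf_counter() - start

    print(f"throughput:   {n_jobs / elapsed:.1f} jobs/s")       # output rate
    print(f"mean latency: {elapsed / n_jobs * 1e3:.2f} ms/job")  # time per job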

Due to these dimensions and their interrelatedness, measurement is complex: resources are shared (congestion and queueing; e.g. memory bandwidth is limited, so contention arises). How does performance scale?

Performance/Cost Ratio
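
A hedged illustration of the metric, assuming throughput (jobs/s) as the performance measure and purchase cost as the denominator; the machine names and numbers are made up:

    # Compare two hypothetical machines by performance/cost ratio.
    machines = {
        "A": {"throughput_jobs_per_s": 1200, "cost_usd": 8000},
        "B": {"throughput_jobs_per_s": 1800, "cost_usd": 15000},
    }

    for name, m in machines.items():
        ratio = m["throughput_jobs_per_s"] / m["cost_usd"]
        print(f"machine {name}: {ratio:.3f} jobs/s per dollar")
    # A wins on performance/cost even though B is faster in absolute terms.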

How is performance evaluated?

Analytical: model jobs moving from queue X to queue Y in a certain pattern.

Simulation: modelling of the important features of the system.

Measurement: the most credible, but it only works if the system or application already exists, may be intrusive to other parts or other tasks, and requires external software tools.
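
To contrast the analytic and simulation approaches on the same toy model: a minimal sketch assuming a single M/M/1 queue (Poisson arrivals at rate lam, exponential service at rate mu; the notes don't fix a specific model). Measurement would instead time the real system directly.

    import random

    lam, mu = 0.8, 1.0  # arrival and service rates (hypothetical values)

    # Analytical solution: M/M/1 mean response time in closed form.
    analytic_response = 1 / (mu - lam)

    # Simulation: estimate the same quantity with the Lindley recursion.
    random.seed(0)
    wait, total_response, n = 0.0, 0.0, 200_000
    for _ in range(n):
        service = random.expovariate(mu)
        total_response += wait + service                # response = wait + service
        interarrival = random.expovariate(lam)
        wait = max(0.0, wait + service - interarrival)  # waiting time of next job

    print(f"analytic : {analytic_response:.3f}")
    print(f"simulated: {total_response / n:.3f}")

With these rates the analytic value is 5.0 time units, and the simulated estimate should land close to it.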

Goal:

Knowns and Unknowns

Performance analysis tools often check known knowns and known unknowns,

but it is impossible to look into unknown unknowns.

System performance in real systems

For a full computer system, models are usually not scalable,

so we usually use tools instead.

Full stack means from the application down to bare metal.

SDLC additions

Perspectives

Challenges

What is Observability?

Can we actually measure the things we care about?

We cannot measure without observability tools.

In production environments, such tools take resources away from production tasks through resource contention.

Virtual machines cannot directly access such hardware counters, but progress is being made.
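
A minimal sketch of one such tool in use, assuming a Linux host with the perf CLI installed; inside many virtual machines these hardware events are unavailable or show up as "<not supported>":

    import subprocess

    # Count hardware events for a short command with the Linux perf CLI.
    cmd = ["perf", "stat", "-e", "cycles,instructions", "--", "sleep", "1"]
    result = subprocess.run(cmd, capture_output=True, text=True)

    # perf stat writes its summary to stderr, not stdout.
    print(result.stderr)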

Alerts
  |
Metrics
  |
Statistics
  |
Counters

(each layer is derived from the layer below)
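
A minimal sketch of how a raw counter becomes a statistic, then a metric, then an alert, assuming a Linux /proc/stat counter source; the 90% threshold is hypothetical:

    import time

    def read_cpu_counters():
        # First line of /proc/stat: "cpu  user nice system idle iowait irq softirq steal ..."
        with open("/proc/stat") as f:
            values = [int(v) for v in f.readline().split()[1:]]
        idle = values[3] + values[4]  # idle + iowait ticks
        return idle, sum(values)

    # Counters: raw, monotonically increasing tick counts.
    idle1, total1 = read_cpu_counters()
    time.sleep(1)
    idle2, total2 = read_cpu_counters()

    # Statistic: utilization derived from counter deltas over the interval.
    busy_fraction = 1 - (idle2 - idle1) / (total2 - total1)

    # Metric: the statistic in a reportable unit (percent CPU busy).
    cpu_busy_pct = 100 * busy_fraction

    # Alert: fires when the metric crosses the threshold.
    if cpu_busy_pct > 90:
        print(f"ALERT: CPU {cpu_busy_pct:.1f}% busy")
    else:
        print(f"CPU {cpu_busy_pct:.1f}% busy")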

Types of parameters and outputs

Workload parameters vs. system parameters

A synthetic workload must follow the discovered workload parameters.
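
A minimal sketch of generating such a workload, assuming the real workload was characterized by an arrival rate and a mean request size (both values here are hypothetical):

    import random

    # Parameters "discovered" by measuring the real workload (hypothetical values).
    arrival_rate = 50.0   # jobs per second
    mean_size_kb = 128.0  # mean request size

    random.seed(42)

    def synthetic_workload(n_jobs):
        """Yield (arrival_time, size_kb) pairs that follow the measured parameters."""
        t = 0.0
        for _ in range(n_jobs):
            t += random.expovariate(arrival_rate)          # Poisson arrivals
            yield t, random.expovariate(1 / mean_size_kb)  # exponential sizes

    for arrival, size in synthetic_workload(5):
        print(f"t={arrival:.4f}s  size={size:.1f} KB")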