CS-350 - Fundamentals of Computing Systems
Problem 1
Label each of the statements below with either True (T) or False (F):
Statement T/F
a. If a system has reached steady state, doubling the arrival rate will always cause the average response time to double.
b. In an M/M/N system with per-server utilization ρ, the probability that all the servers are busy is ρ^N.
c. In a round-robin system with quantum size q, a job with length C will always be preempted C/q times, where “/” denotes integer division.
d. Any taskset that is schedulable under RM will still be schedulable under non-preemptive RM, i.e. RM where preemption has been disabled.
e. Consider multithreaded code using (case A) a spinlock or (case B) a semaphore to implement a mutex. In case B, a process unable to acquire the mutex will never be scheduled, unlike in case A.
f. Peterson’s Algorithm is safe to use for 10-party mutual exclusion on processors that do not perform out-of-order execution.
g. Under identical conditions, one should expect a better average response time in an M/M/1 system than in the same system where SRT scheduling is used instead of FIFO.
h. Under RM, a task with a long period that overruns its WCET might miss a deadline, but it cannot cause a deadline miss in another task with a shorter period.
i. When predicting job lengths using sliding window averages, a larger window size w causes the system to consume more memory.
j. An M/D/1 system is a good model to be used when requests are strictly periodic, i.e. the inter-arrival time between two subsequent requests is a constant.
Note: There are 10 questions. A correct answer will get you 2 points; an incorrect answer -1 points; a blank answer 0 points. The score is capped at 14.
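Several of the statements above hinge on how M/M/1 response time scales with load. As a quick sanity check for statement (a), the sketch below evaluates the standard M/M/1 steady-state formula R = 1/(μ − λ); the service rate μ = 10 req/s is a made-up illustrative value, not a parameter from this exam.

```python
# Sanity check for statement (a): in an M/M/1 queue the average
# response time is R = 1 / (mu - lambda), which is nonlinear in the
# arrival rate. The service rate mu below is purely illustrative.

def mm1_response_time(lam, mu):
    """Average response time of an M/M/1 queue in steady state."""
    assert lam < mu, "system must be stable (lambda < mu)"
    return 1.0 / (mu - lam)

mu = 10.0
r1 = mm1_response_time(4.0, mu)   # lambda = 4 req/s  -> R = 1/6 s
r2 = mm1_response_time(8.0, mu)   # arrival rate doubled -> R = 1/2 s

# Doubling the arrival rate here triples the response time,
# so response time does not simply double with the arrival rate.
```

Because R grows hyperbolically as λ approaches μ, the effect of doubling λ depends entirely on how close the system already is to saturation.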
Problem 2
A multi-threaded image processing server uses multiple worker threads executing the worker_main(...) function (partially) provided in Listing 2. Workers process requests for operations over images identified by the img_id field in the request struct. Requests are taken from a shared queue the_queue using the get_from_queue(...) procedure to fetch the next request in FIFO order. A parent thread (code omitted) is responsible for adding requests into the queue. It is crucial that requests targeting the same img_id are completed in their order of arrival.
(a) [6 points] Complete the missing lines of code. You should have at most one statement per blank. A semicolon completes a statement, so you are not allowed to squeeze multiple statements separated by semicolons into the same blank. A curly bracket completes a statement too.
(b) [5 points] You are evaluating the performance of your server by initially limiting the number of workers to 1. You have already measured the distribution of the length (not response time!!) of image processing requests arriving at the server. The annotated CDF of that distribution is provided in Figure 1. If the client were to continuously send on average 20 requests per second, what utilization would you expect to measure for your server?
(c) [7 points] Refer again to the situation considered in Q(b). What is the response time average that you should expect? What additional assumptions are you making to provide this prediction?
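The kind of computation Q(b) and Q(c) call for can be sketched as follows. The mean service time used below (25 ms) is a placeholder: the real value must be read off the CDF in Figure 1, which is not reproduced here.

```python
# Sketch of the utilization and response-time computation for Q(b)/(c).
# mean_service is a HYPOTHETICAL placeholder; the actual mean request
# length must be obtained from the CDF in Figure 1.

lam = 20.0            # arrival rate, req/s (given in Q(b))
mean_service = 0.025  # mean request length in seconds -- placeholder

# Utilization of a single-server system: rho = lambda * E[S].
rho = lam * mean_service

# IF service times were exponentially distributed (M/M/1 assumption),
# the average response time would be E[S] / (1 - rho).
resp_time = mean_service / (1.0 - rho)
```

Note that the utilization formula only needs the mean service time, while the response-time prediction requires an extra distributional assumption (e.g., exponential service, i.e., M/M/1), which is exactly the kind of assumption Q(c) asks you to state.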
Problem 3
A data analysis pipeline operates in three stages. In stage A, parsing, authentication, and signature validation is performed in a single step. Next, the data is sent to stage B for decompression. The uncompressed data is finally sent to stage C for analysis and the result leaves the system. In the initial version of the pipeline, each stage is implemented using a single-processor machine with a FIFO queue. It appears that arrivals of data analysis requests from the outside are Poisson-distributed with an average arrival rate of 10 requests per second. In this initial version of the system, the time it takes for a request to be processed at the three stages is unknown.
(a) [2 points] Provide the initial system diagram, along with the model(s) you intend to use to analyze its stages and any assumption that you need to be able to reason about the system.
(b) [3 points] Derive a set of constraints on the processing time of requests at the three stages that need to hold for the system to be able to reach steady state.
(c) [2 points] The initial system is modified to allow data to be structured in nested compressed objects. After decompression at stage B, there might be nested compressed objects that need to be decompressed further. As such, with 30% probability, requests leaving stage B will go back to stage B as if they were new requests.
Produce below the updated system diagram, but no need to repeat the assumptions you made in Q (a).
(d) [5 points] It was finally benchmarked that on average, stage-A processing takes 80 milliseconds; stage-B processing takes 30 milliseconds; and stage-C processing takes 50 milliseconds. What is the end-to-end response time for a generic request arriving at the system?
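A computation in the style Q(d) requires can be sketched as below, assuming each stage behaves as an independent M/M/1 queue (a Jackson-network-style assumption) and that the 30% stage-B feedback loop from Q(c) is in place. The code uses only the figures stated in the problem.

```python
# Sketch of the Q(d) computation: effective arrival rates under the
# 30% stage-B feedback loop, then per-stage M/M/1 response times,
# weighted by the expected number of visits to each stage.

lam = 10.0                                  # external arrival rate, req/s
S = {"A": 0.080, "B": 0.030, "C": 0.050}    # mean service times (s)
p_fb = 0.3                                  # P(request re-enters stage B)

# Flow balance: stage B sees external flow plus its own feedback,
# so lambda_B = lam / (1 - p_fb); A and C see only the external flow.
lam_eff = {"A": lam, "B": lam / (1 - p_fb), "C": lam}
visits  = {"A": 1.0, "B": 1 / (1 - p_fb),  "C": 1.0}

def stage_resp(s, l):
    """Per-visit M/M/1 response time at a stage (assumes stability)."""
    mu = 1.0 / s
    assert l < mu, "stage must be stable"
    return 1.0 / (mu - l)

# End-to-end response time = sum over stages of (visits * per-visit R).
end_to_end = sum(visits[k] * stage_resp(S[k], lam_eff[k]) for k in S)
```

The key steps are deriving the effective (internal) arrival rate at the stage with feedback before checking stability, and weighting that stage's per-visit response time by the expected number of visits.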
(e) [2 points] The system is now updated to allow processed data to output intermediate compressed results that need to be (1) decompressed and (2) further processed. As such, requests leaving stage C might be sent back to stage B. This happens, on average, for 98 out of 200 requests. The processing time at the stages is unchanged from Q(d). Provide the new system diagram—once again no need to re-state the assumptions.
(f) [5 points] What is the capacity of the new system?
Problem 4
A multi-user system is designed to accept requests coming from 3 different users. All the users send their first request at time 0, and all the users’ requests are added to a single queue and processed by a single-processor system. Requests from different users can be safely reordered (if deemed convenient) and even be preempted. But it is crucial that all the requests from the same user are served strictly in order of arrival. We know the following about our beloved users:
(1) User #1 sends requests that are uniformly distributed with lengths in the range [2, 6] seconds. They send one request every 15 seconds.
(2) User #2 sends requests with length following an unknown distribution with a mean of 1 second, a minimum of 0.5 seconds and a maximum of 2 seconds. They do so every 7 seconds.
(3) User #3 sends requests that have a constant 1-second length. They do so every 4 seconds.
(a) [5 points] At a random point in time, you observe the following schedule over a 30-second time window. What scheduler is the system employing?
(b) [3 points] What is the average response time that each user is experiencing with the requests that complete in the considered time window?
(c) [5 points] Devise a simple scheduling strategy that arises from the knowledge you have about your users that minimizes the average response time for all the users. Briefly describe the strategy and draw the schedule that it would produce when all the requests exhibit their mean length. Use higher-user-ID first in case of ties.
(d) [5 points] All the users reported back from their experience with the system and are not happy. U1 wants you to guarantee that by the time they send their next request, the previous U1’s request has been fully processed. U2 and U3 want the same thing for their own requests. Devise a scheduling strategy to meet the new demand and demonstrate that it works always! Use the provided grid only if strictly necessary.
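The guarantee Q(d) asks for amounts to giving each request a deadline equal to the arrival of the same user's next request, i.e., deadline = period. One candidate reasoning, sketched below, checks total worst-case utilization against the EDF bound (EDF is optimal on a single processor and meets all deadlines whenever U ≤ 1); the choice of EDF here is an illustration, not necessarily the exam's intended answer.

```python
# Schedulability sketch for Q(d): treat each user as a periodic task
# with deadline = period, and use worst-case request lengths so the
# guarantee holds for every request, not just on average.

periods = {"U1": 15.0, "U2": 7.0, "U3": 4.0}   # inter-arrival times (s)
wcet    = {"U1": 6.0,  "U2": 2.0, "U3": 1.0}   # max lengths from the text

utilization = sum(wcet[u] / periods[u] for u in periods)

# EDF on a single processor with deadline = period meets every
# deadline if and only if total utilization <= 1.
schedulable_under_edf = utilization <= 1.0
```

With the maximum lengths stated in the problem the worst-case utilization comes out just under 1, which is the kind of "works always" argument the question asks you to demonstrate.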
Problem 5
You are writing a cryptographic library and at the heart of it you have to perform a random shuffle over the items of an array. For simplicity, consider an array of just 4 elements. Your library uses N threads and it is important that the array is correctly shuffled without becoming corrupted. The code should shuffle the array by performing K swaps between random elements. After K swaps, all the threads should remain blocked (the parent will unblock them later as needed). Your colleague has provided a first draft of the code that will be executed in parallel by each of the N threads in Listing 3.
(a) [5 points] You have noticed that oftentimes the array is incorrectly shuffled, meaning that some elements are lost and others appear repeatedly in the array. Complete the blanks in the code above to solve the issue in a simple way, even if that means sacrificing performance. Not all blanks might need to be filled. Use only one statement per blank. Remember: all the N processes must remain blocked (as mentioned above) at the end of K swaps.
(b) [8 points] Your colleague says that your solution compromises performance too much. So, they have written a version that performs array shuffles with better parallelism in Listing 5. While testing with N = 4 threads on a single-processor system, you have found that a deadlock might happen. Focus ONLY on lines 19–22 and use the table below to provide the progress interleaving that illustrates the problem. When completing the table, state the values taken by pos1 and pos2 explicitly at steps 1–8.
(c) [5 points] As usual, you try to solve the issue. This time, with an eye on achieving decent parallelism. You slightly tweak the code as shown in Listing 6. The only change is the addition of lines 22 to 25. Does the tweak solve the problem for the case N = 4? Briefly explain why or why not. NOTE: here, simply guessing YES or NO without a valid explanation yields no points.
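The deadlock pattern Q(b) hints at, two threads each holding one of the two per-slot locks and waiting for the other, is classically broken by imposing a global lock-acquisition order. The standalone sketch below (not the code from Listings 5 and 6) illustrates the idea: always acquire the lower-indexed slot's lock first, so no circular wait can form.

```python
# Lock-ordering sketch: parallel random swaps over a 4-element array,
# with one lock per slot. Acquiring the two locks in a fixed global
# order (lowest index first) rules out circular wait, hence deadlock.

import random
import threading

ARRAY_LEN = 4
array = list(range(ARRAY_LEN))
slot_locks = [threading.Lock() for _ in range(ARRAY_LEN)]

def swap_random_pair(rng):
    pos1, pos2 = rng.sample(range(ARRAY_LEN), 2)  # two distinct slots
    first, second = sorted((pos1, pos2))          # global lock order
    with slot_locks[first]:
        with slot_locks[second]:
            array[pos1], array[pos2] = array[pos2], array[pos1]

def worker(n_swaps, seed):
    rng = random.Random(seed)
    for _ in range(n_swaps):
        swap_random_pair(rng)

threads = [threading.Thread(target=worker, args=(1000, i)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# After all swaps, the array must still be a permutation of 0..3.
```

Holding both slot locks for the duration of a swap keeps the array a valid permutation, while the fixed acquisition order removes the deadlock without serializing all swaps behind a single global lock.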
Problem 6
Consider a system with 5 processes P1, . . . , P5 sharing 3 resources R1, . . . , R3.
(a) [5 points] The immutable system parameters are provided in Table 1, while the state of the system at a given point in time is provided in Table 2. Complete the tables with the missing parameters.
(b) [6 points] Determine if the current state would be deemed safe by the Banker’s Algorithm for deadlock avoidance. Provide your reasoning for full marks.
(c) [7 points] While the system is in the state described by Tables 1 and 2, P3 submits an allocation request for 0 units of R1, 1 unit of R2, and 0 units of R3. Can the request be safely granted? You may use Table 5 to answer the question.
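The safety check that Q(b) and Q(c) rely on can be sketched as below. The matrices in the example are illustrative placeholders, not the contents of Tables 1 and 2 (which are not reproduced here).

```python
# Sketch of the Banker's-Algorithm safety check: repeatedly look for a
# process whose remaining need fits within the currently available
# resources, let it finish, and reclaim its allocation. The state is
# safe iff every process can finish in some order.

def is_safe(available, allocation, need):
    """Return True iff some completion order lets every process finish."""
    work = list(available)
    done = [False] * len(allocation)
    progress = True
    while progress:
        progress = False
        for i, fin in enumerate(done):
            if not fin and all(need[i][j] <= work[j] for j in range(len(work))):
                # P_i can run to completion and release its allocation.
                work = [w + a for w, a in zip(work, allocation[i])]
                done[i] = True
                progress = True
    return all(done)

# Tiny MADE-UP example with 2 processes and 3 resource types:
safe = is_safe(available=[1, 1, 0],
               allocation=[[1, 0, 0], [0, 1, 1]],
               need=[[0, 1, 0], [1, 0, 0]])
```

For Q(c), the same check is applied to the state obtained by tentatively granting the request (subtracting it from Available and Need, adding it to Allocation): the request is granted only if the resulting state is still safe.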