==== Exercises ====

The lab can be found at this [[https://github.com/RazvanN7/D-Summer-School/tree/master/lab-05|link]].

=== 1. Parallel programming ===

Navigate to the 1-parallel directory. Read and understand the source file students.d. Compile and run the program, and explain its behaviour.

- What is the issue, if any?
- We want to fix the issue, but we want to continue using **Task**s.
- Do we really have to manage all of this ourselves? I think we can do a better **parallel** job.
- Increase the number of students by a factor of 10, then 100. Does the code scale?
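As a starting point, here is a minimal sketch of splitting work between a **Task** and the main thread. The grades array and the partialSum helper are hypothetical stand-ins for whatever students.d actually computes:

<code d>
import std.parallelism : task;

double partialSum(double[] slice)
{
    import std.algorithm.iteration : sum;
    return slice.sum;
}

void main()
{
    import std.stdio : writeln;

    // Hypothetical data standing in for the students in students.d.
    auto grades = new double[1_000];
    grades[] = 7.5;

    // Run the first half in a new thread while this thread handles the rest.
    auto t = task!partialSum(grades[0 .. $ / 2]);
    t.executeInNewThread();
    immutable second = partialSum(grades[$ / 2 .. $]);

    // yieldForce blocks until the task finishes and returns its result.
    immutable total = t.yieldForce + second;
    writeln("average = ", total / grades.length); // average = 7.5
}
</code>

Note that the task's result is only safe to read after yieldForce (or a similar force call) has returned.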
+ | |||
+ | === 2. Getting functional with parallel programming === | ||
+ | |||
+ | Navigate to the 2-parallel directory. Read and understand the source file students.d. | ||
+ | |||
+ | - The code looks simple enough, but always ask yourselves: can we do better? Can we change the **foreach** into a oneliner? | ||
+ | - Increase the number of students by a factor of 10, then 100. Does the code scale? | ||
+ | - Depending on the size of our data, we might gain performance by tweaking the **workUnitSize** parameter. Lets try it out. | ||
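As a hint, a parallel foreach over a map operation can usually be collapsed into a single **amap** call. This sketch uses a hypothetical grades array and a made-up curving function, not the real contents of students.d:

<code d>
import std.parallelism : taskPool;

// One-liner: amap applies the lambda to every element in parallel
// and returns the results in a new array.
double[] curve(double[] grades)
{
    return taskPool.amap!(g => g * 1.1)(grades);
}

void main()
{
    import std.stdio : writeln;

    // Hypothetical data standing in for the students in students.d.
    auto grades = new double[10_000];
    grades[] = 5.0;

    auto curved = curve(grades);

    // The optional second argument is workUnitSize: how many elements
    // each task processes before asking the pool for more work.
    auto chunked = taskPool.amap!(g => g * 1.1)(grades, 1_000);

    writeln(curved.length, " ", chunked.length); // 10000 10000
}
</code>

Small work units give better load balancing; large ones reduce scheduling overhead, which matters when the per-element work is cheap.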
+ | |||
+ | === 3. Heterogeneous tasks === | ||
+ | |||
+ | Until now we've been using **std.parallelism** on sets of homogeneous tasks. | ||
+ | Q: What happens when we want to perform parallel computations on distinct, unrelated tasks? | ||
+ | A: We can use [[https://dlang.org/phobos/std_parallelism.html#.TaskPool|taskPool]] to run our task on a pool of worker threads. | ||
+ | |||
+ | Navigate to the 3-taskpool directory. Write a program that performs three tasks in parallel: | ||
+ | - One reads the contents of **in.txt** and writes to stdout the total number of lines in the file | ||
+ | - One calculates the average from the previous exercise | ||
+ | - One does a task of your choice | ||
+ | |||
+ | To submit tasks to the **taskPool** use [[https://dlang.org/phobos/std_parallelism.html#.TaskPool.put|put]]. | ||
<note>
Don't forget to wait for your tasks to finish.
</note>
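The shape of the solution could look like the sketch below. The contents of **in.txt** and the averaged grades are made up here so the example is self-contained; your program should use the real lab files:

<code d>
import std.parallelism : task, taskPool;
import std.stdio : writeln;

size_t countLines(string path)
{
    import std.algorithm.searching : count;
    import std.file : readText;
    return readText(path).count('\n');
}

double average(int[] grades)
{
    import std.algorithm.iteration : sum;
    return cast(double) grades.sum / grades.length;
}

void main()
{
    import std.file : write;
    write("in.txt", "first\nsecond\nthird\n"); // only so the sketch runs standalone

    // Three distinct, unrelated tasks submitted to the same pool of workers.
    auto lines = task!countLines("in.txt");
    auto avg   = task(&average, [9, 7, 8]);
    auto hello = task({ writeln("hello from the pool"); });

    taskPool.put(lines);
    taskPool.put(avg);
    taskPool.put(hello);

    // Don't forget to wait: yieldForce blocks until each task is done.
    writeln("lines: ", lines.yieldForce);
    writeln("avg: ", avg.yieldForce);
    hello.yieldForce;
}
</code>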
+ | |||
+ | === 4. I did it My way === | ||
+ | |||
+ | Let's implement our own concurrent **map** function. | ||
+ | Navigate to the 4-concurrent-map directory. Starting from the serial implementation found in **mymap.d** modify the code such that | ||
+ | the call to **mymap** function will execute on multiple threads. You are required to use the **std.concurrency** module for this task. | ||
+ | |||
+ | Creating a thread implies some overhead, thus we don't want to create a thread for each element, but rather have a thread process chunks of elements; basically we need a **workUnitSize**. | ||
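One possible shape for the solution is sketched below. The myMap name, the workUnitSize chunking, and the x * x operation are all assumptions standing in for the real code in mymap.d:

<code d>
import std.concurrency : ownerTid, receiveOnly, send, spawn;

// Each worker maps over one chunk and mails the result back, tagged
// with the chunk's offset so the owner knows where it belongs.
void worker(immutable(int)[] chunk, size_t offset)
{
    import std.algorithm.iteration : map;
    import std.array : array;
    // x * x stands in for whatever function mymap.d really applies.
    immutable(int)[] result = chunk.map!(x => x * x).array.idup;
    send(ownerTid, offset, result);
}

// Hypothetical chunked concurrent map: one thread per workUnitSize elements.
int[] myMap(immutable(int)[] input, size_t workUnitSize)
{
    auto output = new int[input.length];
    size_t nChunks;
    for (size_t i = 0; i < input.length; i += workUnitSize, ++nChunks)
    {
        immutable end = i + workUnitSize < input.length ? i + workUnitSize
                                                        : input.length;
        spawn(&worker, input[i .. end], i);
    }
    // Collect exactly one message per chunk, in whatever order they finish.
    foreach (_; 0 .. nChunks)
    {
        auto msg = receiveOnly!(size_t, immutable(int)[])();
        output[msg[0] .. msg[0] + msg[1].length] = msg[1][];
    }
    return output;
}

void main()
{
    import std.stdio : writeln;
    immutable(int)[] data = [1, 2, 3, 4, 5, 6, 7];
    writeln(myMap(data, 3)); // [1, 4, 9, 16, 25, 36, 49]
}
</code>

Because **spawn** forbids sharing mutable data, the chunks cross thread boundaries as immutable slices and the results come back as messages; the offset tag makes the output order independent of thread scheduling.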
+ | |||
+ | === 5. Don't stop me now === | ||
+ | |||
+ | Since we just got started, let's implement our our concurrent **reduce** function. **reduce** must take the initial accumulator value as it's first parameter, and then the list of elements to reduce. | ||
+ | |||
+ | <note> | ||
+ | Be careful about those race conditions. | ||
+ | </note> | ||
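One race-free design is to avoid a shared accumulator entirely: each worker reduces its own chunk and sends the partial result back as a message. The sketch below is hard-wired to + and assumes the reduction operator is associative; names are mine, not from the lab skeleton:

<code d>
import std.concurrency : ownerTid, receiveOnly, send, spawn;

// Each worker reduces one chunk and sends its partial result back.
void partialSum(immutable(int)[] chunk)
{
    import std.algorithm.iteration : sum;
    send(ownerTid, chunk.sum);
}

// Hypothetical chunked concurrent reduce.
int myReduce(int seed, immutable(int)[] input, size_t workUnitSize)
{
    size_t nChunks;
    for (size_t i = 0; i < input.length; i += workUnitSize, ++nChunks)
    {
        immutable end = i + workUnitSize < input.length ? i + workUnitSize
                                                        : input.length;
        spawn(&partialSum, input[i .. end]);
    }
    // No shared accumulator means no race: partial results arrive as
    // messages and only this thread ever touches acc.
    int acc = seed;
    foreach (_; 0 .. nChunks)
        acc += receiveOnly!int();
    return acc;
}

void main()
{
    import std.stdio : writeln;
    immutable(int)[] data = [1, 2, 3, 4, 5, 6];
    writeln(myReduce(0, data, 2)); // 21
}
</code>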
+ | |||
+ | === 6. Under pressure === | ||
+ | |||
+ | The implementations we did at ex. 4 and ex. 5 are great and all, but they have the following shortcoming: they will each spawn a number of threads (most likely equal to the number of physical cores), so calling them both in parallel will spawn twice the amount of threads that can run in parallel. | ||
+ | |||
+ | Change your implementations to use a thread pool. The worker threads will consume jobs from a queue. The map and reduce implementations will push job abstractions into the queue. | ||
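A minimal pool on top of **std.concurrency** might look like the sketch below. Since each thread has its own mailbox, the owner dispatches jobs round-robin instead of having one shared queue; the Stop message, the function-pointer jobs, and all names are assumptions, and a real pool would enqueue richer job abstractions (function + chunk + reply address) for map and reduce:

<code d>
import std.concurrency : ownerTid, receive, receiveOnly, send, spawn, Tid;
import std.parallelism : totalCPUs;

struct Stop {}

// A worker pulls jobs from its mailbox until told to stop, then
// reports how many jobs it ran.
void workerLoop()
{
    bool running = true;
    int jobsDone;
    while (running)
    {
        receive(
            (void function() job) { job(); ++jobsDone; },
            (Stop _) { running = false; }
        );
    }
    send(ownerTid, jobsDone);
}

// A toy job standing in for a map or reduce work item.
void sayHello()
{
    import std.stdio : writeln;
    writeln("job done");
}

// Spawn one pool sized to the machine, push nJobs into it round-robin,
// shut it down, and return how many jobs the workers ran in total.
int runJobs(int nJobs)
{
    auto workers = new Tid[totalCPUs];
    foreach (ref w; workers)
        w = spawn(&workerLoop);

    foreach (i; 0 .. nJobs)
        send(workers[i % $], &sayHello);

    foreach (w; workers)
        send(w, Stop());

    int total;
    foreach (w; workers)
        total += receiveOnly!int();
    return total;
}

void main()
{
    import std.stdio : writeln;
    writeln("total jobs: ", runJobs(8)); // total jobs: 8
}
</code>

With a single pool shared by both map and reduce, the total thread count stays bounded by the hardware no matter how many calls run concurrently.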
+ | |||
+ | Now we're talking! |