I read something the other day:
"In an ideal world, we would have 'm' parallel tasks and 'm' processors. But since we are not living in the ideal world, we usually have 'm' parallel tasks and 'n' processors, where 'n' < 'm'. In the former case, we would complete 'm' tasks in t time, where t is the time that a single task takes to complete."

I don't know why this statement got me thinking, but there seems to be something wrong with it, and I can't seem to put my finger (literally!) on it. The proverbial "bone of contention" with this statement seems to be a basic assumption which is usually false in a real-world scenario: the assumption that all the m parallel tasks being spoken of here are completely independent - as in, the memory they access, the buses they send their data on, and the processors they use are all different from each other. Only then would one get m times the performance of a single-processor system.
However, say the tasks aren't completely independent. Let's assume that some of the tasks refer to the same memory space, and some of the tasks need results from other tasks before they can even start their own operation. In that case, even if we do have as many processors as tasks at hand, not all of them will actually be used, which amounts to a waste of resources.
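To make that waste concrete, here is a toy sketch (the task names, durations, and dependencies are all invented for illustration): even with one processor per task, the dependency chain leaves most of the processor-time idle.

```python
# Toy example: 4 tasks, 4 processors, but dependencies force idle time.
# Each task maps to (duration, set of prerequisite tasks).
tasks = {
    "A": (2, set()),
    "B": (3, set()),
    "C": (1, {"A", "B"}),   # C cannot start until A and B finish
    "D": (2, {"C"}),        # D cannot start until C finishes
}

def makespan(tasks):
    """Total elapsed time: each task starts as soon as its prerequisites finish."""
    finish = {}
    def finish_time(name):
        if name not in finish:
            duration, deps = tasks[name]
            finish[name] = duration + max((finish_time(d) for d in deps), default=0)
        return finish[name]
    return max(finish_time(t) for t in tasks)

total_work = sum(duration for duration, _ in tasks.values())   # 8 units of actual work
span = makespan(tasks)                                         # 6: A and B overlap, then C, then D
utilization = total_work / (len(tasks) * span)                 # busy slots / available slots
print(span, round(utilization, 2))
```

With 4 processors available for 6 time units, only 8 of the 24 processor-time slots do work: utilization is about 33%, even though processor count equals task count.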
Which brings me to my question,
"Exactly where on the Performance v/s Utilization line is the trade-off between the two deemed acceptable?"

Of course, the trade-off point (if I might be allowed to say so) will vary from application to application, depending on the required response time of each application and a million other factors (not literally). So a better way to phrase the question would be: "Is there some way to quantify a minimum ratio (maybe not the right term), or a worst-case relationship, between Performance and Resource Utilization? In essence, is there a guarantee that while my performance won't suffer much, my resources will still have to work a minimum amount of the time?"
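One classical handle on this (not named in the post; offered here only as a sketch) is Amdahl's law: if a fraction p of the work parallelizes perfectly and the rest is serial, speedup and average processor utilization fall out directly. The value p = 0.9 below is an arbitrary assumption.

```python
# Amdahl's law: with n processors and parallelizable fraction p,
#   speedup(n)    = 1 / ((1 - p) + p / n)
#   efficiency(n) = speedup(n) / n   <- average processor utilization

def speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

def efficiency(p, n):
    return speedup(p, n) / n

p = 0.9  # assume 90% of the work is parallelizable
for n in (1, 2, 4, 8, 16):
    print(n, round(speedup(p, n), 2), round(efficiency(p, n), 2))
```

The pattern it shows is exactly the trade-off line the question asks about: performance (speedup) keeps rising as processors are added, but utilization (efficiency) falls monotonically, so "acceptable" is a choice of point on that curve rather than something the math fixes for you.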
Any CSE Gods here care to explain??
P.S. Just discovered a strange quirk of the Blogger editor... Apparently you cannot enclose 'm' inside of HTML tags.
3 Responses:
Nice one. Thought that once. No ideas though.
Where is the context?!? WTF?
I would assume that by definition a "parallel task" is not something which shares lines. E.g., take your Windows Vista install running Winamp as well as IE. IE can run on one core and Winamp can run on another. Together they will run twice as fast as on a normal single-core machine.
"Exactly where on the Performance v/s Utilization line is the trade-off between the two deemed acceptable?"
WTF?!? Is this some business briefing?
The point is not about utilisation/performance. The point is that some calculations take 5 years on a normal machine and the *ONLY* way for them to run is via parallel computing. There is no utilisation/performance benchmark where you would say it is acceptable.
I think I'll have to come to you again for some CP II lessons, because your posts are slowly going beyond my comprehension! Translation required.