scott
2007-01-10 15:37:09 UTC
I'm looking to develop a cross platform (Windows / OSX / Linux)
project. It will require two threads both of which will be very
processor time intensive. There will be a need for some data sharing
between the two threads. Now, due to the intensive processor usage,
on a multi core system I'd like to take advantage of the extra cores.
I'm familiar with multithreaded programming on single core systems, but
I do not know how that applies to a multi core system (seeing as I
work in the desktop area, which has only recently started to see multi
core systems). How can I be sure I'm taking advantage of the extra
cores on a system that has them?
Do I continue with what I would consider regular threading, where the
threads share a memory space and access to shared data is controlled
with mutexes, etc.? I'm not familiar enough with real parallel
computing to know whether a mutex works just as well at locking data
shared by threads on different cores as it does in a single core
solution. In fact, I'm not even sure that a single process can start
threads on different cores.
Do I need some other solution, whereby my threads are separate
processes in themselves and therefore may not have a shared memory
space, instead sharing data through some other means?
In either instance I would want the power to decide and force the two
tasks onto separate cores. Whilst I appreciate that much of the time
you may want the OS to determine what core to put a process/thread on
when it's started, I will know beforehand what the processor loading
is. Additionally, the software will be run on a closed, clearly
defined system, so I will not need to worry about what other loads may
be placed upon it by other users.
Any advice and insight would be much appreciated, thanks.