NO.

Awww shucks! And there I was thinking that we had finally solved the problem of perpetual motion.

We see another phenomenon: coding practice not keeping up with hardware development. Programmers simply cannot harness multiple CPUs well at present (as a general observation of average practice).
I think that is the key issue, and it has been around in one shape or another for quite a long time.
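Just to illustrate what "harnessing multiple CPUs" actually demands of the programmer, here is a minimal C++ sketch of my own (not from anything quoted above): even a trivial sum has to be explicitly partitioned across threads and joined at the end; the extra cores do nothing at all for a plain sequential loop.

    #include <algorithm>
    #include <cstddef>
    #include <iostream>
    #include <numeric>
    #include <thread>
    #include <vector>

    int main() {
        std::vector<int> data(1000000, 1);

        // hardware_concurrency() may report 0; fall back to a single thread.
        unsigned n_threads = std::max(1u, std::thread::hardware_concurrency());

        std::vector<long long> partial(n_threads, 0);
        std::vector<std::thread> workers;
        std::size_t chunk = data.size() / n_threads;

        for (unsigned t = 0; t < n_threads; ++t) {
            std::size_t begin = t * chunk;
            std::size_t end = (t + 1 == n_threads) ? data.size() : begin + chunk;
            // Each thread sums its own slice into its own slot: no shared writes, no locks.
            workers.emplace_back([&, t, begin, end] {
                partial[t] = std::accumulate(data.begin() + begin,
                                             data.begin() + end, 0LL);
            });
        }
        for (auto& w : workers) w.join();  // the programmer must remember to join

        std::cout << std::accumulate(partial.begin(), partial.end(), 0LL) << '\n';
    }

None of that partitioning or joining happens by itself, which is rather the point: the hardware arrived, but the everyday code largely didn't follow.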

I recall back in the days of the first (single-core) P4 processors. I had acquired a dual-processor PIII machine from Design Engineering. My colleague was surprised that I had this rather than the (then) new P4/1.5 GHz machine that he had. Actually, I think he was jealous that mine came with a 21" Sony monitor and a professional graphics card.

He commented that although my two processors added up to more than his single one, I would not see any advantage because the software wasn't written to take advantage of twin processors.

There were two basic flaws in this argument:

1. The design software that I was supporting was intended to take advantage of twin processors, even though the standard office and development applications that we used were not.

2. There was a real difference between a PIII Xeon and a P4 of that era (the 1 MB cache, for instance?)

I feel that backwards compatibility still takes precedence in a lot of application development.