AnandTech | Intel’s Core 2011 Mobile Roadmap Revealed: Sandy Bridge Part II

Late last week we pulled back the covers on Intel’s next-generation Core architecture update: Sandy Bridge. Due out in Q1 2011, Sandy Bridge will be the first high-performance monolithic CPU/GPU from Intel, and our preview taught us a lot about its performance: generally a noticeable improvement over the current generation of processors, on both the CPU and GPU sides. If you haven’t read the preview yet, I’d encourage you to do so.

One of the questions we got in response to the article was: what about Sandy Bridge for notebooks? While Sandy Bridge is pretty significant for mainstream quad-core desktops, it’s even more tailored to the notebook space. I’ve put together some spec and roadmap information for those of you who might be looking for a new notebook early next year.

Mobile Sandy Bridge

Like the desktop offering, mobile Sandy Bridge will arrive sometime in Q1 of next year. If 2010 was any indication of what’s to come, we’ll see both mobile and desktop parts launch at the same time around CES.

The mobile Sandy Bridge parts are a little more straightforward in some areas but more confusing in others. The biggest problem is that both dual and quad-core parts share the same brand; in fact, the letter Q is the only indication that the Core i7 2720QM is a quad-core and the Core i7 2620M isn’t. Given AMD’s Bulldozer strategy, I’m sure Intel doesn’t want folks worrying about how many cores they have – just that higher numbers mean better things.

Mobile Sandy Bridge CPU Comparison

| Processor | Base Frequency | L3 Cache | Cores / Threads | Max Single Core Turbo | Memory Support | Intel Graphics EUs | Intel HD Graphics Frequency / Max Turbo | TDP |
|---|---|---|---|---|---|---|---|---|
| Core i7 2920XM | 2.5GHz | 8MB | 4 / 8 | 3.5GHz | DDR3-1600 | 12 | 650 / 1300MHz | 55W |
| Core i7 2820QM | 2.3GHz | 8MB | 4 / 8 | 3.4GHz | DDR3-1600 | 12 | 650 / 1300MHz | 45W |
| Core i7 2720QM | 2.2GHz | 6MB | 4 / 8 | 3.3GHz | DDR3-1600 | 12 | 650 / 1300MHz | 45W |
| Core i7 2620M | 2.7GHz | 4MB | 2 / 4 | 3.4GHz | DDR3-1600 | 12 | 650 / 1300MHz | 35W |
| Core i5 2540M | 2.6GHz | 3MB | 2 / 4 | 3.3GHz | DDR3-1333 | 12 | 650 / 1150MHz | 35W |
| Core i5 2520M | 2.5GHz | 3MB | 2 / 4 | 3.2GHz | DDR3-1333 | 12 | 650 / 1150MHz | 35W |

You’ll notice a few changes compared to the desktop lineup. Clock speeds are understandably lower, and all launch parts have Hyper Threading enabled. Mobile Sandy Bridge also officially supports up to DDR3-1600 while the desktop CPUs top out at DDR3-1333 (though running them at 1600 shouldn’t be a problem assuming you have a P67 board).

The major difference between mobile Sandy Bridge and its desktop counterpart is that all mobile SB launch SKUs have two graphics cores (12 EUs), while only some desktop parts have 12 EUs (it looks like the high-end K SKUs will get them). The base GPU clock is lower, but it can turbo up to 1.3GHz, higher than most desktop Sandy Bridge CPUs. Note that the GPU we tested in Friday’s preview had 6 EUs, so mobile Sandy Bridge should be noticeably quicker as long as we don’t run into memory bandwidth issues. Update: Our preview article may have actually used a 12 EU part; we’re still trying to confirm!
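If you want a rough mental model for that claim: peak shader throughput scales with EU count times clock. Here’s a minimal sketch; the clocks below are illustrative assumptions, not confirmed figures.

```python
# Crude upper bound for integrated GPU scaling: throughput ~ EUs x clock.
# The preview part's turbo clock is an assumption for illustration.
def peak_throughput(eus: int, clock_mhz: int) -> int:
    return eus * clock_mhz

preview_part = peak_throughput(6, 1300)   # 6 EU GPU from the preview (assumed clock)
mobile_part = peak_throughput(12, 1300)   # 12 EU mobile SKU per the roadmap above

print(f"Theoretical speedup: {mobile_part / preview_part:.1f}x")
# Prints 2.0x; real-world gains will be smaller if DDR3 memory bandwidth
# becomes the bottleneck, which is exactly the concern raised above.
```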

Even if we only get 50% more performance out of the 12 EU GPU, that’d be enough for me to say that there’s no need for discrete graphics in a notebook – as long as you don’t use it for high-end gaming.

While Arrandale boosted multithreaded performance significantly, Sandy Bridge is going to offer an across-the-board increase in CPU performance and a dramatic increase in GPU performance. And from what I’ve heard, NVIDIA’s Optimus technology will work with the platform in case you want to do some serious gaming on your notebook.

AnandTech | Farewell to ATI, AMD to Retire the ATI Brand Later this Year

Four years ago AMD did the unthinkable: it announced the $5.4 billion acquisition of ATI in a combination of cash and stock. What followed was a handful of very difficult years for AMD, an upward swing for ATI, and the eventual spinoff of AMD’s manufacturing facilities to GlobalFoundries in order to remain profitable and competitive.

In the years post acquisition, many criticized AMD for blowing a lot of money on ATI and having little to show for it. Even I felt that for $5.4 billion AMD could’ve put together its own competent graphics and chipset teams.

Despite the protests and sideline evaluations, good has come from the acquisition. The most noticeable result is that AMD’s chipset business is the strongest it has ever been: AMD branded chipsets and integrated graphics are actually very good. And later this year, AMD will ship its first Fusion APUs (single-die CPU/GPU): Ontario, using Bobcat cores and an AMD GPU. Ontario will be the first tangible example of direct AMD/ATI collaboration since the acquisition.

Just as we’re about to see results from the acquisition, AMD is announcing that it will retire the ATI brand later this year. Save those boxes, guys; soon you won’t see an ATI logo on any product sold in the market.

The motivation behind the decision to retire the ATI brand comes from AMD’s own internal research. Unfortunately AMD isn’t sharing the details of this research, just the three major findings from it:

1) AMD brand preference triples when the person surveyed is aware of the ATI-AMD merger.
2) The AMD brand is viewed as stronger than ATI when compared to graphics competitors (presumably NVIDIA).
3) The Radeon and FirePro brands themselves rate very highly on their own, without ATI attached to them.

The second point is really the justification for all of this. If AMD’s internal research is to be believed, AMD vs. NVIDIA is better from a marketing standpoint than ATI vs. NVIDIA. Honestly, AMD’s research seems believable. AMD has always seemed like a stronger brand to me than ATI. There’s little room for ego in business (despite it being flexed all too often) and I don’t believe AMD would hurt its marketing simply to satisfy any AMD executives – the research makes sense.

Meanwhile the third point is the realization that there are very few product lines left carrying the ATI brand. ATI’s chipset operations were quickly absorbed into AMD and given appropriate naming, while ATI’s consumer electronics products, such as its digital TV division, were sold off to other companies. Radeon and FirePro are the only two ATI product lines left, and both are strong brands on their own.

The brand switch also reflects some internal changes at AMD. Many important ATI employees have been relocated to AMD’s base of operations in Austin, Texas to help with Llano, Ontario, and AMD’s future Fusion products. The line between AMD and ATI has been blurring for some time.

The brand switch will start late this year, I’d guess in Q4 with Ontario and a new GPU release. AMD (and NVIDIA) originally had GPU designs for TSMC’s 32nm process node; however, extensive teething problems with 40nm and 32nm forced TSMC to cancel the node and move directly to 28nm. This cancellation required both companies to redesign their parts to work within the existing 40nm process and push their original plans out to coincide with 28nm in 2011. As a result we will see an incremental update to the Radeon HD 5000 series at the end of this year, but don’t expect the sort of performance boost we got with the 5800 vs. the 4800. This upcoming hardware will probably carry the AMD Radeon HD 6000 series brand. All existing hardware will continue to carry the ATI brand.

To go along with the new brand we get new logos. If OEMs want to display a badge without the AMD brand, there’s an alternative for that as well:

AMD states the AMD-less logos are purely at the request of OEMs who sell systems with Intel CPUs and AMD GPUs. I suspect Intel’s logo program may have some stipulations on being used adjacent to a sticker with an AMD logo on it, although AMD told me it was purely at the request of the OEMs trying to avoid confusion.

The other major change is AMD’s brand simplification at the retail level. Last year AMD introduced a new platform brand called Vision. If you buy a PC with all AMD components (CPU, chipset and GPU), it can carry a Vision logo (similar to Intel’s Centrino brand). There are four categories of Vision support, each with increasing hardware requirements: Vision, Vision Premium, Vision Ultimate and Vision Black. The idea is that if you buy a standard Vision PC you’ll get a good entry-level machine, while buying up the stack grants you additional capabilities and performance (e.g. Blu-ray playback, web cam support, discrete GPUs, multicore CPUs, etc.). We’ve explained it all in greater detail here.

Starting next year, AMD’s Vision badge will be the only CPU brand you see on retail desktops/notebooks. You’ll still get Radeon/FirePro badges on systems that use those parts, but you’ll no longer see a Phenom II, Athlon II, Turion or Sempron logo on Vision systems. Instead you’ll see what CPU is inside on the little card that sits next to the system at your local retailer.

I suspect this will last until AMD introduces Bulldozer, at which point it’ll probably be very eager to build up its brand – assuming its performance is competitive.

Final Words

Retiring the ATI brand comes at an interesting time in the microprocessor market. Graphics is becoming much more important, but to date there are few good consumer applications for powerful GPUs outside of 3D games. AMD views this as the perfect time to consolidate its brands, before the CPU/GPU line gets even blurrier.

AMD also pointed out that its market share has been on a steady climb over the past few years. According to Mercury Research, AMD’s discrete GPUs climbed from ~33% marketshare at the end of 2007 to 51% last quarter. AMD has executed unusually well on the GPU side and NVIDIA has had some very difficult years in the process, both of which are responsible for AMD’s climb. The ATI name will go out on a high note.


AMD Discrete GPU Marketshare, Source: Mercury Research

If all goes well with AMD’s two exciting new CPU architectures next year, the brand will only get stronger going forward. Bobcat could do very well in today’s netbook/thin and light notebook form factors and Bulldozer may mark a return to competition in the server and high end desktop markets.

DailyTech – Intel Acquires Infineon Wireless, Promises Not to Kill ARM Support

Move may mark the start of the x86 smartphone invasion, though

As previously reported, Intel has been pursuing an acquisition of Infineon Technologies AG’s wireless unit.  Infineon, spun off in 1999 from Siemens AG, has seen lots of recent business making baseband signal processing chips for numerous Android smartphones and for the iPhone.

The deal is now official.  Intel plans to close the deal by calendar Q1 2011.  It will purchase Infineon’s wireless unit, WLS, for $1.4B USD in cash (a vastly smaller sum than its recent $7.68B USD acquisition of the world’s top antivirus software maker, McAfee).

The deal does not include much of Infineon’s R&D or fabrication business.  It also does not cover the company’s ARM CPU offerings, which Infineon hopes will soon gain traction in Android smartphones.

The move gives Intel a mobile wireless communications platform, which it can potentially employ with Atom platform x86 processors as part of a system-on-a-chip (SoC) solution for smartphones.  That SoC package could also employ hardware-security using intellectual property from McAfee.

Currently there are no smartphones on sale with x86 processors (current smartphones use the alternative ARM architecture).  Intel hopes to soon change that, and the assets from Infineon help prepare it for its upcoming battle in the smartphone sector.

Intel promises to play fair, though, and to continue to support ARM customers like Apple.  The company’s press release states, “WLS will operate as a standalone business. Intel is committed to serving WLS’ existing customers, including support for ARM-based platform.”

The acquisition could also help Intel add wireless 3G or 4G connections to its netbook chipsets.  Infineon and Intel’s press release indicates that they are currently gunning for WiMAX as the 4G (fourth generation wireless) technology of choice.  Sprint, the first carrier in the U.S. to deploy a widespread 4G network, uses WiMAX, but the nation’s top network, Verizon, is betting on LTE for its 4G effort.  Infineon has also looked into LTE technology in the past.

AnandTech | Intel’s 50Gbps Silicon Photonics Link: The Future of Interfaces

On Tuesday, Intel demonstrated the world’s first practical data connection using silicon photonics – a 50 gigabit per second optical data connection built around an electrically pumped hybrid silicon laser. Intel achieved the 50 gigabit/s data rate by multiplexing four 12.5 gigabit/s wavelengths onto one fiber – wavelength division multiplexing. Intel dubbed its demo the “50G Silicon Photonics Link.”
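The arithmetic behind wavelength division multiplexing is simple: each wavelength acts as an independent lane on the same fiber, so the lane rates just add up. A trivial sketch (lane count and per-lane rate are from Intel’s demo; the rest is illustrative):

```python
# Aggregate throughput of a wavelength-division multiplexed (WDM) link:
# each wavelength carries an independent data stream down the same fiber,
# so the aggregate rate is the sum of the per-lane rates.
LANES = 4                  # wavelengths multiplexed onto the fiber
RATE_PER_LANE_GBPS = 12.5  # modulation rate of each hybrid silicon laser

aggregate_gbps = LANES * RATE_PER_LANE_GBPS
print(f"{LANES} x {RATE_PER_LANE_GBPS} Gb/s = {aggregate_gbps:.0f} Gb/s")
```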

Fiber optic data transmission isn’t anything new – it’s the core of what makes the internet as we know it possible. What makes Intel’s demonstration unique is that the laser is fabricated primarily out of a low-cost, mass-producible, highly understood material: silicon.

For years, chip designers and optical scientists alike have dreamt about the possibilities of merging traditional microelectronics and photonics. Superficially, one would expect it to be easy – after all, both fundamentally deal with electromagnetic waves, just at different frequencies (MHz and GHz for microelectronics, THz for optics).
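To put that frequency gap in perspective, here’s a quick back-of-the-envelope sketch (my illustration, not from Intel; 1550nm is a common telecom wavelength, not necessarily the one Intel used):

```python
# Compare the operating frequency of a telecom laser with a CPU clock.
C = 3.0e8  # speed of light in m/s (rounded)

def frequency_hz(wavelength_m: float) -> float:
    """Frequency of an electromagnetic wave: f = c / wavelength."""
    return C / wavelength_m

laser_hz = frequency_hz(1550e-9)  # 1550 nm infrared light
cpu_hz = 3.0e9                    # a typical 3 GHz CPU clock

print(f"1550 nm light: {laser_hz / 1e12:.0f} THz")         # ~194 THz
print(f"Ratio vs 3 GHz clock: {laser_hz / cpu_hz:,.0f}x")  # ~64,500x
```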

On one side, microelectronics deals with integrated circuits and components such as transistors and copper wires, built on the thoroughly understood and widely deployed CMOS manufacturing process. It’s the backbone of microprocessors, and at the core of conventional computing today. Conversely, photonics employs – true to its name – photons, the basic unit of light. Silicon photonics is the use of optical systems that use silicon as the primary optical medium, instead of other more expensive optical materials. Eventually, photonics has the potential to supplant microelectronics with optical analogues of traditional electrical components – but that’s decades away.

Until recently, successfully integrating the two was a complex balancing act, with photonics leveraged only where it was feasible. Material constraints have made photonics effective primarily as a long-haul means of getting data from point to point. To a large extent this has made sense, because copper traces on motherboards have been fast enough – but we’re getting closer and closer to their limit.

DailyTech – Nanotechnology Delivers Revolutionary Pumpless Water Cooling

Forget traditional metal block coolers: a nanowick could remove 10 times the heat of current chip designs

A collaboration of university researchers and top industry experts has created a pumpless liquid cooling system that uses nanotechnology to push the limits of past designs.

One fundamental computing problem is that there are only two ways to increase computing power — increase the speed or add more processing circuits.  Adding more circuits requires advanced chip designs like 3D chips or, more traditionally, die shrinks that are approaching the limits of the laws of physics as applied to current manufacturing approaches.  Meanwhile, speedups are constrained by the fact that increasing chip frequency increases power consumption and heat, as evidenced by the gigahertz war that peaked in the Pentium 4 era.

A team led by Suresh V. Garimella, the R. Eugene and Susie E. Goodson Distinguished Professor of Mechanical Engineering at Purdue University, may have a solution for cooling higher frequency chips and power electronics.  His team cooked up a bleeding-edge cooler consisting of tiny copper spheres and carbon nanotubes, which wick coolant passively toward hot electronics.

The coolant used is everyday water, which is transferred to an ultrathin “thermal ground plane” — a flat hollow plate.

The new design can handle an estimated 10 times the heat of current computer chip designs.  That opens the door to higher frequency CPUs and GPUs, but also more efficient electronics in military and electric vehicle applications.


The new design can wick an incredible 550 watts per square centimeter.  Mark North, an engineer with Thermacore, comments, “We know the wicking part of the system is working well, so we now need to make sure the rest of the system works.”

The design was first verified with computer models made by Garimella, Jayathi Y. Murthy, a Purdue professor of mechanical engineering, and doctoral student Ram Ranjan.  Purdue mechanical engineering professor Timothy Fisher’s team then produced physical nanotubes to implement the cooler and test it in an advanced simulated electronic chamber.

Garimella describes this fused approach of using computer modeling and experimentation hand in hand, stating, “We have validated the models against experiments, and we are conducting further experiments to more fully explore the results of simulations.”

Essentially the breakthrough offers pumpless water-cooling, as the design naturally propels the water.  It also uses microfluidics and advanced microchannel research to allow the fluid to fully boil, wicking away far more heat than similar past designs.

This is enabled by a smaller pore size than in previous sintered designs.  Sintering is the fusing together of tiny copper spheres to form a cooling surface.  Garimella comments, “For high drawing power, you need small pores.  The problem is that if you make the pores very fine and densely spaced, the liquid faces a lot of frictional resistance and doesn’t want to flow. So the permeability of the wick is also important.”
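The tradeoff Garimella describes can be sketched with two textbook relations (my illustration, not taken from the paper): the Young–Laplace equation for capillary pumping pressure, and Darcy’s law for flow through a porous wick.

$$
\Delta P_{\text{cap}} = \frac{2\gamma\cos\theta}{r},\qquad
Q = \frac{\kappa A}{\mu L}\,\Delta P,\qquad
\kappa \propto r^{2}
$$

Here γ is the water’s surface tension, θ the contact angle, r the pore radius, κ the wick’s permeability, μ the viscosity, and A and L the flow cross-section and path length. Shrinking r boosts the capillary pressure that draws coolant toward the hot spot, but permeability falls off roughly with r², choking the flow – exactly the balance Garimella is pointing at.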

To further improve the design and make the pores even smaller, the team used 50nm copper-coated carbon nanotubes.

The research was published in this month’s edition of the peer-reviewed journal International Journal of Heat and Mass Transfer.

Raytheon Co. is helping design the new cooler.  Besides Purdue, Thermacore Inc. and Georgia Tech Research Institute are also aiding the research, which is funded by a Defense Advanced Research Projects Agency (DARPA) grant.  The team says they expect commercial coolers utilizing the tech to hit the market within a few years.  Given that commercial cooling companies (Thermacore, Raytheon) were involved, there’s credibility in that estimate.

DailyTech – “Proof” That Linux Project Ripped Off Unix Code Released

Leak from biased source will obviously draw skepticism from the open-source community

IBM, which is among the largest firms pushing the open-source Linux operating system, was slammed with a $1B USD lawsuit in 2003 from SCO, one of the owners of a Unix distribution. The lawsuit alleged that IBM ripped off Linux code from the Unix codebase and was “devaluing” it.

The damages eventually swelled to $5B USD, but SCO was defeated when Novell was shown to hold most of the applicable Unix intellectual property and Novell waived the case.  In the end, SCO filed for bankruptcy, and the Novell loss resulted in a ruling that SCO owes Novell $2.35M USD for copyright infringements (a total later bumped to $3.4M USD).

Even as SCO is appealing [PDF] that decision, Kevin McBride, a lawyer and brother of former SCO CEO Darl McBride, has released [see comments section] a wealth of documents showing some of the code that SCO claimed IBM’s Linux ripped off.

He writes:

While UNIX ownership rights are still not finally settled (pending SCO’s appeal of Novell’s jury victory in March, 2010) it is certainly my view, after careful review of all these issues, that Linux DOES violate UNIX copyrights, particularly in ELF code and related tools (debugger code, etc.), header file code wherein implementation code (not just the header interface) have been copied verbatim; STREAMS code; etc. that the Linux community use without license. Then there is the entire question of the overall structure and sequence of Linux being almost an exact copy of UNIX.

There should be little question by anyone at this point that Linux uses a LOT of UNIX code. The Linux world thinks that use is permissive. SCO disagreed. That is the only real issue to be discussed here.

Will Novell win the current SCO appeal? Probably. Will Novell donate the UNIX copyrights to the Linux community if it wins the current appeal? Probably–although Novell’s Linux activities have been difficult to predict in recent years. But does Linux violate UNIX copyrights? Yes.

So, in my opinion, Linux users owe Novell–and particularly its excellent Morrison & Foerster legal team–a huge debt for coming to the rescue and keeping Linux a royalty-free product.

And follows up:

SCO submitted a very material amount of literal copying from UNIX to Linux in the SCO v. IBM case. For example, see the following excerpts from SCO’s evidence submission in Dec. 2005 in the SCO v. IBM case:

Tab 422 through Tab 409; Tab 333 through Tab 329; and Tab 255 through Tab 229.

There was MUCH more submitted in the SCO v. IBM case that I cannot disclose publicly because it is comparison of code produced by IBM under court protective order that prohibits disclosure.

But the court in SCO v. IBM will probably never decide whether use of this (and all the other UNIX code) in Linux was, or was not permissive, because in the SCO v. Novell case, the jury decided in March 2010 that Novell owns the UNIX copyrights, not SCO.

As I mentioned in the reply to Andreas, if you Linux guys want to give credit where credit is due, you should all thank Novell for having the courage to take the case all the way to trial (I thought SCO had a much stronger case on the ownership question) and its legal counsel, Morrison & Foerster, for doing an outstanding job for Novell at trial–Michael Jacobs, Eric Acker and Sterling Brennan.

In case those links no longer work, you can also get a collected archive of the PDFs here.

Looking briefly at the code involved, some of it indeed appears to be copied and pasted, or at least designed from common design documents.  The fact that so many variable names match up would certainly indicate that.  However, the order of the code has been rearranged, and there have been numerous deletions and insertions in these sections.

Further, some of the segments of code included are pretty generic.  In these cases it is harder to tell whether the code was indeed copied as claimed, or just implemented similarly.
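To illustrate the kind of signal reviewers look for (a sketch of my own, not a tool used in the case): verbatim copying tends to preserve identifier names even when code is reordered, so a crude first pass is to measure identifier overlap between two files.

```python
import re

def identifiers(source: str) -> set[str]:
    """Extract identifier-like tokens from C-style source code."""
    # A real comparison would also filter out language keywords
    # (int, return, ...), which inflate the overlap score.
    return set(re.findall(r"[A-Za-z_][A-Za-z0-9_]*", source))

def jaccard(a: set[str], b: set[str]) -> float:
    """Set similarity: 1.0 means identical identifier vocabularies."""
    return len(a & b) / len(a | b) if (a | b) else 0.0

# Hypothetical snippets, loosely styled after STREAMS-like code.
original = "int strm_open(queue_t *q) { return strm_init(q, STRM_MAX); }"
suspect  = "int strm_open(queue_t *q) { strm_init(q, STRM_MAX); return 0; }"

score = jaccard(identifiers(original), identifiers(suspect))
print(f"identifier overlap: {score:.2f}")
# High overlap despite reordering; but as noted above, generic code
# produces false positives, so human review is still required.
```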

Ultimately, whether the code was copied or not may prove a moot point, as the jury trial resoundingly declared Novell the owner of the Unix code.  And Novell is not interested in suing IBM at present.  Unless SCO’s appeal, filed with the U.S. Court of Appeals for the Tenth Circuit on July 7, 2010, succeeds, this leak may merely prove an interesting footnote in a case of extreme importance to the open source movement.

AnandTech | Western Digital’s New VelociRaptor VR200M: 10K RPM at 450GB and 600GB

Truth be told, I haven’t had a mechanical hard drive on my test bench since shortly after the X25-M review back in 2008. Once the major hiccups that faced SSDs were dealt with, I switched all of my testbeds over. I got more consistent benchmarks, better performance and, since I was using X25-Ms, better reliability.

A week ago Western Digital wrote me and asked if I had any interest in covering hard drives. I’d been planning on building out a HDD addition to our live benchmark comparison engine, so I was definitely interested. It’s not that I had forgotten about mechanical storage, it’s that nothing exciting had happened there in a while.

It was 2003 when WD introduced its first 10,000 RPM desktop ATA hard drive – the Raptor. After 5 years of incremental updates, we saw the first major change in 2008 with the VelociRaptor. Western Digital moved to a 2.5″ form factor mounted to a 3.5″ heatsink. The smaller platters meant read/write heads had less distance to travel, which reduced access times. It also meant lower power consumption, something that would matter in the enterprise world. Before I made the switch to SSDs, the VelociRaptor was our testbed hard drive of choice. It was the fastest thing money could buy. But that was 2008. Since then even regular 7200RPM drives have been able to catch up to WD’s dinosaur.

Despite releasing its first mainstream SSD, Western Digital is still committed to hard drive manufacturing. The cost per GB of even the cheapest SSDs is still far higher than that of the fastest hard drives, and thus there’s room for newer, faster hard drives. The past couple of years have seen capacities go way up. Western Digital and Seagate both ship 2TB drives, and both of these drives are arguably just as fast as the original VelociRaptor, still stuck at its 300GB capacity. That all changes today. This is the new VelociRaptor VR200M:

Available in 450GB and 600GB capacities ($299 and $329), the new VelociRaptor picks up where the old one left off. It’s still a 2.5″ drive with an optional 3.5″ heatsink (called the IcePAK, standard on all drives sold in the channel) that’ll keep it cool and let it mount easily in a 3.5″ bay. The 2.5″ drive measures 15mm in height, so in case you were wondering, you can’t use it in most notebooks.

WD increased platter density from 150GB to 200GB per platter, which results in higher sequential transfer rates and lower track-to-track seek times (down from 0.75ms to 0.4ms). Average seek time remains unchanged at 3.6ms, and with the spindle speed holding steady at 10,000 RPM, average rotational latency stays put as well. The buffer moves up to 32MB from 16MB. Just like the old VelociRaptor, WD has chosen not to outfit this new drive with its largest buffer (the 64MB currently shipping on Caviar Black drives).
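As a quick sketch of where rotational latency comes from (straightforward arithmetic, not WD’s numbers): on average the platter has to spin half a revolution before the target sector arrives under the head.

```python
# Average rotational latency of a hard drive: half a revolution, on
# average, before the requested sector passes under the read/write head.
def avg_rotational_latency_ms(rpm: int) -> float:
    seconds_per_revolution = 60.0 / rpm
    return seconds_per_revolution / 2.0 * 1000.0

for rpm in (5400, 7200, 10_000, 15_000):
    print(f"{rpm:>6} RPM -> {avg_rotational_latency_ms(rpm):.2f} ms")
# 10,000 RPM works out to 3.00 ms; adding the 3.6 ms average seek puts
# the VelociRaptor's average access time at roughly 6.6 ms.
```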

Specifications

| | WD VelociRaptor VR200M | WD VelociRaptor VR150M |
|---|---|---|
| Capacity | 600GB/450GB | 300GB/150GB |
| Interface | SATA 6 Gb/s | SATA 3 Gb/s |
| Rotational Speed | 10,000 RPM | 10,000 RPM |
| Buffer Size | 32MB | 16MB |
| Track to Track Seek | 0.4 ms | 0.75 ms |
| Average Seek Time | 3.6 ms | 3.6 ms |
| Full Stroke Seek | 8.5 ms (typical) | 8.5 ms (typical) |
| Transfer Rate (Buffer to Disk) | 145 MB/s | 128 MB/s |
| Platter Density | 200GB per platter | 150GB per platter |
| Warranty | 5 Years | 5 Years |

The on-board controller is WD’s latest dual-core design. I don’t have much information about it, but I’m guessing that as drive management gets more complex, the controllers must scale up in complexity as well. The drive supports 6Gbps SATA; however, you’ll see no performance benefit from it (in fact, in many cases it’s actually slower than 3Gbps SATA if you’ve got a good integrated SATA controller).

Western Digital claims to have increased the number of head load/unload cycles the new VelociRaptor can withstand. The drive heads must be positioned over the rotating platters in order to read/write data. When they aren’t in use, the heads are retracted (or unloaded) to prevent any accidental damage to the platters and thus your data. The old 300GB VelociRaptor was rated for 50,000 load/unload operations. The new VR200M? 600,000.