Monday, November 3, 2014

Topre's Type Heaven mechanical keyboard reviewed

MECHANICAL KEYBOARDS HAVE ENJOYED something of a renaissance recently. Much of that revival can be attributed to Cherry's MX mechanical key switches, which have cropped up in all manner of clicky gaming keyboards—as well as in more austere offerings designed for hardcore typists.
The Cherry MX switches aren't the only mechanical ones around, though. For many years, discerning users with ample budgets have splurged on Topre keyboards—high-priced, made-in-Japan offerings that feature a unique type of mechanical key switch made up of metal springs and rubber domes. Where most Cherry MX-based keyboards rarely venture far from the $100 mark, Topre offerings cost upward of $230.
Correction: they used to cost upward of $230.
Earlier this year, Topre introduced the Type Heaven, a keyboard that brings the firm's unique spring-and-rubber switches to a no-frills package with a less terrifying price tag. Right now, you can find the Type Heaven on sale for $150 at Amazon—not a huge step up from, say, the Cherry MX-based Das Keyboard Model S, which sells for $139.
Topre had to cut a few corners to reach the lower price point, of course. The Type Heaven is manufactured in China rather than Japan, and it lacks some of the bells and whistles of its pricier brethren, such as distributed key switch weighting and the ability to re-map keys with hardware DIP switches. Also, the Type Heaven's key caps are made of a different type of plastic, and they're laser etched rather than printed using more durable dye sublimation.
Those sacrifices are small, though, and they've helped to make the Type Heaven an interesting—and competitive—alternative to high-end Cherry MX keyboards. The folks at EliteKeyboards.com were kind enough to send us one to test, and I've spent the past little while banging away on it.
Rubber domes with a twist
Before we talk more about the Type Heaven, we should first explain what makes it special: those fancy Topre switches. The proper term for them is "electrostatic capacitive switches," and their operation is different from that of other mechanical switches like the Cherry MX series or even IBM's buckling springs.
According to Topre's original patent application, the electrostatic capacitive switch design combines a conical spring with a rubber dome, and it's actuated capacitively, without requiring the physical coupling of internal parts. What this means is that, when a key is depressed, the top end of the spring is pushed toward an electrode at the bottom until the capacitance reaches a certain threshold. At that threshold, the switch is actuated, and the rubber dome generates a "snap feeling" that gives the user some tactile feedback. The switch can then be pushed farther down until it bottoms out, or it can be allowed to spring back up to its resting position.
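To make the principle concrete, here's a minimal sketch that treats the top of the spring and the electrode beneath it as a parallel-plate capacitor: as the key travels down, the gap shrinks, capacitance rises, and the controller fires once a threshold is crossed. Every constant below is an illustrative assumption rather than a Topre spec (the real conical spring changes shape as it compresses), but the threshold logic is the same idea.

```python
# Illustrative model of capacitive key sensing -- all constants are
# assumed for the sake of the sketch, not taken from Topre's design.
EPSILON_0 = 8.854e-12   # vacuum permittivity, F/m
PLATE_AREA = 2.0e-5     # assumed effective electrode area: ~20 mm^2
REST_GAP = 4.0e-3       # assumed electrode gap at rest: 4 mm of travel
THRESHOLD_F = 7.0e-14   # assumed actuation threshold, farads

def capacitance(depression_mm):
    """Capacitance of the switch with the key pressed down depression_mm."""
    gap_m = REST_GAP - depression_mm * 1e-3
    return EPSILON_0 * PLATE_AREA / gap_m

for mm in (0.0, 0.5, 1.0, 1.5, 2.0, 3.0):
    c = capacitance(mm)
    state = "ACTUATED" if c >= THRESHOLD_F else "idle"
    print(f"{mm:.1f} mm down: {c * 1e15:.0f} fF ({state})")
```

With these made-up constants, the threshold trips at 1.5 mm of travel, which happens to be right around where the real Type Heaven actuates, as we'll see below.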
The patent application outlines an interesting rationale behind the design. It explains that conventional key switches need to be depressed "halfway down" to reach the actuation point. As a result, users may be inclined to bottom out in order to ensure that the switch is properly actuated. Over time, the patent application goes on to say, repeated impacts from bottoming-out can result in "inflammation of the tendon sheath." The patent calls such inflammation an "occupational disease" that provokes "social concern."
Topre's design purports to address this problem by putting the actuation point only 1-2 mm below the key's resting position—and by generating that aforementioned "snap feeling" to inform the user of a successful actuation. In theory, then, the user should have less of an incentive to bottom out, since he or she will need to push down only a small part of the way to actuate the switch and trigger the tactile bump. If Topre is to be believed, this should lead to less tendon sheath inflammation (and, I suppose, less social concern). More to the point, less bottoming out should mean less fatigue.
Topre filed its patent application way back in 1984, when IBM's buckling springs ruled the land. Big Blue's own patent application indeed shows that buckling springs must be pushed down about halfway to be actuated. What of the Cherry MX switches that populate more modern mechanical keyboards? They aren't entirely dissimilar, as it turns out. This PDF on the Cherry website shows that the pressure point ergonomic (brown) and linear (red and black) MX switches actuate at 2 mm out of a 4-mm travel distance. The pressure point click (a.k.a. blue) MX switch actuates at 2.25 mm out of 4 mm—even beyond the halfway point.
Neither the Topre patent nor the company's website quotes the exact actuation point for Topre switches. However, according to my measurements, the Type Heaven's keys actuate at roughly 1.5 mm, and they bottom out just after 4 mm. Actuation requires 45 g of force, which is the same as for Cherry's MX brown switches.
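Put those published and measured figures side by side, and the difference is easy to quantify. Here's a quick calculation using only the numbers quoted above:

```python
# Fraction of total key travel consumed before each switch actuates,
# using the actuation/travel figures quoted in the text.
switches = {
    "Topre (measured on the Type Heaven)": (1.5, 4.0),
    "Cherry MX brown/red/black":           (2.0, 4.0),
    "Cherry MX blue":                      (2.25, 4.0),
}

for name, (actuation_mm, travel_mm) in switches.items():
    frac = actuation_mm / travel_mm
    print(f"{name}: {actuation_mm} mm of {travel_mm} mm ({frac:.0%} of travel)")
```

The Topres trip after only about 38% of their travel, versus 50-56% for the Cherry MX family.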
The Topres have another thing in common with the Cherry MX browns: both switch types provide tactile feedback without generating an audible click upon actuation. Discounting the different internal structures and different actuation points, these two switches—the Topres and Cherry browns—seem pretty comparable on paper. As we're about to see, though, they feel quite different.
Now, if you've ever used laptop-style keyboards with scissor switches or cheap desktop keyboards with rubber domes, you may be aware that those, like the Topres, require very little travel to actuate. However, because they also have a short travel distance, those switches bottom out very easily, and plain rubber domes aggravate the problem with their muddy response and bouncy bottom-out point, which may encourage some users to push down even harder. That's precisely the behavior Topre switches are designed to prevent.
In short, Topre really may be on to something here, provided the theory matches the reality. Let's find out whether it does.
[Continue Reading]

Heavyweight rematch: Gigabyte X79-UP4 vs. MSI X79A-GD45 Plus

INTEL'S ULTRA-HIGH-END DESKTOP PLATFORM got a shot in the arm from Ivy Bridge-E in September. This refresh delivered updated CPU cores, but it didn't bring any changes to the two-year-old LGA2011 platform. Intel didn't update the accompanying X79 Express chipset, which is why we didn't see a wave of new motherboards rolled out with Ivy-E. Asus' X79-Deluxe was the only fresh face at the time, and neither Gigabyte nor MSI has released anything since.
Part of Ivy-E's appeal is the fact that the chip is a drop-in replacement for its Sandy Bridge-based predecessor. Existing X79 boards should require no more than a firmware update to work with the latest processors. Gigabyte and MSI both have newish X79 models that we haven't tested, so we decided to have a little throwdown to see how they compare. In the black and grey trunks, we have the $220 Gigabyte X79-UP4. And, uh, also in black and grey trunks, we have the $250 MSI X79A-GD45 Plus.
These boards have been in the lab for a while now, and I still have trouble telling them apart at a glance. Closer inspection reveals plenty of differences, though.
Gigabyte's X79-UP4
We begin with Gigabyte's X79-UP4, which delivers a lot more extras than one might expect from an affordable X79 model.
The UP4 wears its black-and-grey aesthetic well. The circuit board's matte surface is especially sinister, and I really like the look of the heatsinks. At the very least, the monochrome motif shouldn't clash with other system components.
Zooming in on the socket gives us a better angle on the seven-phase power circuitry feeding the CPU. Each phase is powered by fancy electrical components from International Rectifier. The board has ferrite-core chokes and extra-beefy copper layers, too. We'd expect nothing less from an enthusiast-oriented motherboard.
As you can see, the socket area is a little crowded. The VRM heatsink and top PCIe x16 slot encroach from the north and south, respectively, while dual banks of DDR3 memory slots flank from the east and west. We can't check clearances for every hardware combination, but we can convey a few key measurements.
Like on most modern motherboards, the DIMM slots come closest to the socket. Beware of combining taller memory modules with oversized air coolers. Watch out for the PCIe slot, too; it's all up in the socket's business. At least the VRM cooler is short enough to stay out of the way.
The socket area is crowded in part because the board's ATX footprint has limited room for eight DIMM slots. (There are two slots for each of the processor's quad memory channels.) Gigabyte's decision to add a seventh expansion slot—one more than on the MSI board—also results in tighter clearances around the socket.
All four of the x16 slots get PCI Express 3.0 connectivity directly from the CPU. The first and last slots have x16 and x8 links, respectively. The middle two share an x16 link that can be split evenly between them or devoted solely to the third slot. That works out to 16 + 8 + 16 = 40 lanes, exactly the Gen3 budget of an LGA2011 processor. Props to Gigabyte for putting enough space between the full-fat x16 slots to provide breathing room for dual-card configs. I'm even more impressed that the X79-UP4 can host four double-wide graphics cards, each one connected to the CPU. This board is officially approved for quad CrossFire and SLI configurations.
The rest of the PCIe slots stem from the X79 platform hub. Although the chip is limited to Gen2 connectivity, the older spec should provide sufficient bandwidth for the x1 slots and auxiliary peripheral controllers.
The X79's own peripheral payload is relatively weak. There's no built-in USB 3.0 connectivity, and 6Gbps SATA support is restricted to two of the six ports. Gigabyte provides some relief with a collection of Marvell controllers that adds four internal 6Gbps ports and two external ones. A pair of Fresco Logic controllers handles USB 3.0, providing two ports at the rear plus an internal header for two more.
Four USB 3.0 ports doesn't sound like a lot in the context of modern Haswell boards, but it's enough to handle more high-speed peripherals than most folks need to run simultaneously. The X79-UP4 is loaded with USB 2.0 ports for older devices with lower bandwidth requirements. It even sports a combo PS/2 port for the old-school clicky keyboard crowd.
Gigabyte earns two gold stars for populating the cluster with both common connector types (coaxial and optical) for digital S/PDIF audio output. Bypassing the onboard DAC is the best way to get good sound out of integrated motherboard audio. Unfortunately, digital audio output is limited to stereo playback and surround-sound content with pre-encoded tracks. Music and movies should work great, but multi-channel game audio can't be encoded in real time. The drivers for the Realtek audio codec at least offer some virtual surround mojo that fakes multi-channel output for stereo devices.
The cushioned I/O shield pictured above is pretty awesome—there are no tiny slivers of metal to slice your fingers or get caught up in the ports. Little touches like this can make the building process much easier. Too bad Gigabyte made the front-panel connectors unnecessarily difficult to use.
The front-panel pins are nicely walled off, but there's no external block to simplify the wiring process. Each connector must be attached individually, which can be difficult to do inside a fully-loaded system. MSI and others employ an elegant solution that adds just pennies to the cost of the motherboard. Speaking of which, let's see what MSI's X79A-GD45 Plus has in store...
[Continue Reading]

Nvidia's GeForce GTX 750 Ti 'Maxwell' graphics processor

SO THIS IS different. I don't recall the last time a new GPU architecture made its worldwide debut in a lower-end graphics card—or if I do, I'm not about to admit I've been around that long. In my book, then, Nvidia's "Maxwell" architecture is breaking new ground by hitting the market first in a relatively affordable graphics card, the GeForce GTX 750 Ti, and its slightly gimpy twin, the GeForce GTX 750.
Don't let the "750" in those names confuse you. Maxwell is the honest-to-goodness successor to the Kepler architecture that's been the basis of other GeForce GTX 600 and 700 series graphics cards, and it's a noteworthy evolutionary step. Nvidia claims Maxwell achieves twice the performance per watt of Kepler, without the help of a new chip fabrication process. Given how efficient Kepler-based GPUs have been so far, that's a bold claim.
I was intrigued enough by Maxwell technology that I've hogged the spotlight from Cyril, who usually reviews video cards of this class for us. Having spent some time with them, I've gotta say something: regardless of the geeky architectural details, these products are interesting in their own right. If your display resolution is 1920x1080 or less—in other words, if you're like the vast majority of PC gamers—then dropping $150 or less on a graphics card will get you a very capable GPU. Most of the cards we've tested here are at least the equal of an Xbone or PlayStation 4, and they'll run the majority of PC games quite smoothly without compromising image quality much at all.
Initially, I figured I'd try testing these GPUs with some popular games that aren't quite as demanding as our usual fare. However, I quickly learned these cards are fast enough that Brothers: A Tale of Two Sons and Lego Lord of the Rings don't present any sort of challenge, even with all of the image quality options cranked. Discerning any differences between the GPUs running these games would be difficult at best, so I was soon back to testing Battlefield 4 and Crysis 3.
Why should that rambling anecdote matter to you? Because if you're an average dude looking for a graphics card for his average computer so it can run the latest games, this price range is probably where you ought to be looking. I'm about to unleash a whole torrent of technical gobbledygook about GPU architectures and the like, but if you can slog through it, we'll have some practical recommendations to make at the end of this little exercise, too.
The first Maxwell: GM107
Chip       | ROP pixels/clock | Texels filtered/clock (int/fp16) | Stream processors | Rasterized triangles/clock | Memory interface (bits) | Transistors (millions) | Die size (mm²) | Fab process
Cape Verde | 16               | 40/20                            | 640               | 1                          | 128                     | 1500                   | 123            | 28 nm
Bonaire    | 16               | 56/28                            | 896               | 2                          | 128                     | 2080                   | 160            | 28 nm
Pitcairn   | 32               | 80/40                            | 1280              | 2                          | 256                     | 2800                   | 212            | 28 nm
GK107      | 16               | 32/32                            | 384               | 1                          | 128                     | 1300                   | 118            | 28 nm
GK106      | 24               | 80/80                            | 960               | 3                          | 192                     | 2540                   | 214            | 28 nm
GM107      | 16               | 40/40                            | 640               | 1                          | 128                     | 1870                   | 148            | 28 nm
The first chip based on the Maxwell architecture is code-named GM107. As you can see from the picture and table above, it's a modestly sized piece of silicon roughly halfway between the GK107 and GK106. Like its predecessors and competition, the GM107 is manufactured at TSMC on a 28-nm process.
Purely on a chip level, the closest competition for the GM107 is the Bonaire chip from AMD. Bonaire powers the Radeon R7 260X and, just like the big Hawaii chip aboard the Radeon R9 290X, packs the latest revision of AMD's GCN architecture. The GM107 and Bonaire are roughly the same size, and they both have a 128-bit memory interface. Notice, though, that Bonaire has more stream processors and texture filtering units than the GM107, which means the Maxwell chip will have to make more efficient use of its resources in order to outperform the R7 260X. We'll address that question properly once we've established clock speeds for the actual products. Something to keep in mind.
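To make the resource gap explicit, here are the per-clock figures for the two chips pulled straight from the table above:

```python
# Per-clock resources of the GM107 and Bonaire, from the table above.
gm107   = {"stream processors": 640, "texels/clock (int8)": 40,
           "ROP pixels/clock": 16, "memory bus (bits)": 128}
bonaire = {"stream processors": 896, "texels/clock (int8)": 56,
           "ROP pixels/clock": 16, "memory bus (bits)": 128}

for key, gm_val in gm107.items():
    print(f"{key}: Bonaire/GM107 = {bonaire[key] / gm_val:.2f}x")
```

Bonaire packs 40% more shader and texturing resources per clock, while the ROP counts and memory buses match. Efficiency and clock speed will have to make up that gap.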
The Maxwell GPU architecture
A functional block diagram of the GM107. Source: Nvidia.
Above is a not-terribly-fine-grained representation of the GM107's basic graphics units. From this altitude, Maxwell doesn't look terribly different from Kepler, with the same division of the chip into graphics processing clusters (GPCs, of which the GM107 has only one) and, below that, into SMs or streaming multiprocessors. If you're familiar with these diagrams, maybe you can map the other units on the diagram to the unit counts in the table above. The two ROP partitions are just above the L2 cache, for instance, and each one is associated with a slice of the L2 cache and a 64-bit memory controller. Although these things seem familiar from its prior GPUs, Nvidia says "all the units and crossbar structures have been redesigned, data flows optimized, power management significantly improved, and so on." So Maxwell isn't just the result of copy-paste in the chip design tools, even if the block diagram looks familiar. Maxwell's engineering team didn't achieve a claimed doubling of power efficiency without substantial changes throughout the GPU.
In fact, Nvidia has been especially guarded about what exactly has gone into Maxwell, more so than in the past. These are especially interesting times for GPU development, since the competitive landscape is changing. Nvidia introduced the first mobile SoC with a cutting-edge GPU, the Tegra K1, early this year, and it faces competition not just from AMD but also from formidable mobile SoC firms like Qualcomm. The company has had to adapt its GPU design philosophy to focus on power efficiency in order to play in the mobile space. Kepler was the first product of that shift, and Maxwell continues that trajectory, evidently with some success. Nvidia seems to be a little skittish about divulging too much of the Maxwell recipe, for fear that it could inspire competitors to take a similar path.
With that said, we still know about the basics that distinguish Maxwell from Kepler. The most important ones are in the shader multiprocessor block, or SM. Let's put on our extra-powerful glasses and zoom in on a single SM to see what's inside.
A functional block diagram of the Maxwell SM. Source: Nvidia.
You may recall that the Kepler SMX is a big and complex beast. The SMX has four warp schedulers, eight instruction dispatch units, four 32-wide vector arithmetic logic units (ALUs), and another four 16-wide ALUs. ("Warps" is an Nvidia term that refers to a group of 32 threads that execute together. These groupings are common in streaming architectures like this one. AMD calls its thread groups "wavefronts.") That gives the SMX a total of 192, uhh, math units—thanks to four vec32 ALUs and four vec16 ALUs. Nvidia says the Kepler SM has 192 "CUDA cores," but that's a marketing term intended to incite serious nerd rage. We'll call them stream processors, which is somewhat less horrible.
Anyhow, Maxwell divvies things up inside of the SM a little differently. One might even say this so-called SMM is a quad-core design, if one were determined to use the word "core" more properly. The Maxwell SM is divided into quads, anyhow. Each quad has a warp scheduler, two dispatch units, a dedicated register file, and a single vec32 ALU. The quads have their own banks of load/store units, and they also have their own special-function units that handle tricky things like interpolation and transcendentals.
Nvidia's architects have rejiggered the SM's memory subsystem, too. For instance, the texture cache has been merged with the L1 compute cache. (Formerly, a partitioned chunk of the SM's 64KB shared memory block served as the L1 compute cache.) Naturally, each L1/texture cache is attached to a texture management unit. Each pair of quads shares one of these texture cache/filtering complexes. Separately, the 64KB block of shared memory remains, and as before, it services the entire SM.
Maxwell's control logic and execution resources are more directly associated with one another than in Kepler, and the scale of the SM itself is somewhat smaller. One Maxwell SM has 128 stream processors and eight texels per clock of texture filtering, down by one third and one half, respectively, from Kepler. The number of load/store and special-function units apparently remains the same. Nvidia says the Maxwell SM achieves about 90% of the performance of the Kepler SM in substantially less area. To give you some sense of the scale, the GM107 occupies about 24% more area than the GK107, yet the Maxwell-based chip has 66% more stream processors. Due to more efficient execution, the firm claims the GM107 manages about 2.3X the shader performance of the GK107.
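Those scaling claims are easy to check against the die sizes and shader counts in the table on the previous page. (The table's figures give roughly 25% and 67%, in line with the rounder numbers quoted above.)

```python
# Checking the area-vs-resources claims against the table's figures.
gk107_area_mm2, gm107_area_mm2 = 118, 148
gk107_sps, gm107_sps = 384, 640

area_ratio = gm107_area_mm2 / gk107_area_mm2
print(f"GM107 area vs. GK107:    +{area_ratio - 1:.0%}")             # ~25%
print(f"GM107 shaders vs. GK107: +{gm107_sps / gk107_sps - 1:.0%}")  # ~67%

# Nvidia's claimed 2.3x shader throughput, normalized to die area:
print(f"Claimed shader perf per mm^2: {2.3 / area_ratio:.2f}x")
```

If the 2.3X claim holds up, that's about 1.8 times the shader performance per square millimeter of silicon.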
How does Maxwell manage those gains? Well, the higher ratio of compute to texturing doesn't hurt—the SM has shifted from a rate of 12 flops for every texel filtered to 16. Meanwhile, Nvidia contends that much of the improvement comes from smarter, simpler scheduling that keeps the execution resources more fully occupied. Kepler moved some of the scheduling burden from the GPU into the compiler, and Maxwell reputedly continues down that path. Thanks to its mix of vec16 and vec32 units, the Kepler SM is surely somewhat complicated to manage, with higher execution latencies for thread groups that run on those half-width ALUs. A Maxwell quad outputs one warp per clock consistently, with lower latency. That fact should simplify scheduling and reduce the amount of overhead required to track thread states. I think. The methods GPUs use to keep themselves as busy—and efficient—as possible are still very much secret sauce.
One change in the new SM will be especially consequential for certain customers—and possibly for the entire GPU market. Maxwell restores a key execution resource that was left out of Kepler: the barrel shifter. The absence of this hardware doesn't seem to have negative consequences for graphics, but it means Kepler isn't well-suited to the make-work algorithms used by Litecoin and other digital currencies. AMD's GCN architecture handles this work quite well, and Radeons are currently quite scarce in North America since coin miners have bought up all of the graphics cards. The barrel shifter returns in Maxwell, and Nvidia claims the GM107 can mine digital currencies quite nicely, especially given its focus on power efficiency.
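For the curious, here's why that one piece of hardware matters so much to miners. scrypt, the proof-of-work function behind Litecoin, spends most of its time in Salsa20/8, whose core operation is a 32-bit add-rotate-xor. The sketch below is just an illustration of where a barrel shifter earns its keep: each rotl32 is a single instruction on hardware that has one, but two shifts plus an OR where it must be emulated. This is not mining code.

```python
def rotl32(x, n):
    """32-bit rotate-left. One instruction with a barrel shifter;
    emulated as two shifts plus an OR without one."""
    return ((x << n) | (x >> (32 - n))) & 0xFFFFFFFF

def salsa20_quarterround(y0, y1, y2, y3):
    """One Salsa20 quarter-round: four add-rotate-xor steps."""
    z1 = y1 ^ rotl32((y0 + y3) & 0xFFFFFFFF, 7)
    z2 = y2 ^ rotl32((z1 + y0) & 0xFFFFFFFF, 9)
    z3 = y3 ^ rotl32((z2 + z1) & 0xFFFFFFFF, 13)
    z0 = y0 ^ rotl32((z3 + z2) & 0xFFFFFFFF, 18)
    return z0, z1, z2, z3

print([f"{v:08x}" for v in salsa20_quarterround(0x61707865, 0, 0, 0)])
```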
Beyond the SM, the other big architectural change in Maxwell is the growth of the L2 cache. The GM107's L2 cache is 2MB, up from just 256KB in the GK107. This larger cache should provide two related benefits: bandwidth amplification for the GPU's external memory and a reduction in the power consumed by doing expensive off-chip I/O. Caches keep growing in importance (and size) for graphics hardware for exactly these reasons. I'm curious to see whether the upcoming larger chips based on Maxwell follow the GM107's lead by including L2 caches eight times the size of their predecessors. That may not happen. Nvidia GPU architect Jonah Alben tells us the L2 cache size in Maxwell is independent of the number of SMs or flops on tap.
Along with everything else, the dedicated video processing hardware in Maxwell has received some upgrades. The video encoder can compress video (presumably 1080p) to H.264 at six to eight times the speed of real-time. That's up from 4X real-time in Kepler. Meanwhile, video decoding is 8-10X faster than Kepler due in part to the addition of a local cache for the decoder hardware. This big performance boost probably isn't needed by itself, but again, the goal here is to save power. Along those lines, Nvidia's engineers have added a low-power sleep state, called GC5, to the chip for video playback and other light workloads.
[Continue Reading]

AMD's Radeon R9 295 X2 graphics card reviewed

SEVERAL WEEKS AGO, I received a slightly terrifying clandestine communique consisting only of a picture of myself in duplicate and the words, "Wouldn't you agree that two is better than one?" I assume the question wasn't truly focused on unflattering photographs or, say, tumors. In fact, I had an inkling that it probably was about GPUs, as I noted in a bemused news item.
A week or so after that, another package arrived at my door. Inside were two small cans of Pringles, the chips reduced to powder form in shipping, and a bottle of "Hawaiian volcanic water." Also included were instructions for a clandestine meeting. Given what had happened to the chips, I feared someone was sending me a rather forceful signal. I figured I'd better comply with the sender's demands.
So, some days later, I stood at a curbside in San Jose, California, awaiting the arrival of my contacts—or would-be captors or whatever. Promptly at the designated time, a sleek, black limo pulled up in front of me, and several "agents" in dark clothes and mirrored sunglasses spilled out of the door. I was handed a document to sign that frankly could have said anything, and I compliantly scribbled my signature on the dotted line. I was then whisked around town in the limo while getting a quick-but-thorough briefing on secrets meant for my eyes only—secrets of a graphical nature, I might add, if I weren't bound to absolute secrecy.
Early the next week, back at home, a metal briefcase was dropped on my doorstep, as the agents had promised. It looked like so:
After entering the super-secret combination code of 0-0-0 on each latch, I was able to pop the lid open and reveal the contents.
Wot's this? Maybe one of the worst-kept secrets anywhere, but then I'm fairly certain the game played out precisely as the agents in black wanted. Something about dark colors and mirrored sunglasses imparts unusual competence, it seems.
Pictured in the case above is a video card code-named Vesuvius, the most capable bit of graphics hardware in the history of the world. Not to put too fine a point on it. Alongside it, on the lower right, is the radiator portion of Project Hydra, a custom liquid-cooling system designed to make sure Vesuvius doesn't turn into magma.
Mount Radeon: The R9 295 X2
Liberate it from the foam, and you can see Vesuvius—now known as the Radeon R9 295 X2—in all of its glory.
You may have been wondering how AMD planned to build a viable dual-GPU card around a chip infamous for heat issues even in single-GPU trim. Have a glance at that external 120-mm fan and radiator, and you'll wonder no more.
If only Pompeii had been working with Asetek. Source: AMD.
The 295 X2 sports a custom cooling system created by Asetek for AMD. This system is pre-filled with liquid, operates in a closed loop, and is meant to be maintenance-free. As you can probably tell from the image above, the cooler pumps liquid across the surface of both GPUs and into the external radiator. The fan on the radiator then pushes the heat out of the case. That central red fan, meanwhile, cools the VRMs and DRAM on the card.
We've seen high-end video cards with water cooling in the past, but nothing official from AMD or Nvidia—until now. Obviously, having a big radiator appendage attached to a video card will complicate the build process somewhat. The 295 X2 will only fit into certain enclosures. Still, it's hard to object too strongly to the inclusion of a quiet, capable cooling system like this one. We've seen way too many high-end video cards that hiss like a Dyson.
There's also the matter of what this class of cooling enables. The R9 295 X2 has two Hawaii GPUs onboard, fully enabled and clocked at 1018MHz, slightly better than the 1GHz peak clock of the Radeon R9 290X. Each GPU has its own 4GB bank of GDDR5 memory hanging off of a 512-bit interface. Between the two GPUs is a PCIe 3.0 switch chip from PLX, interlinking the Radeons and connecting them to the rest of the system. Sprouting forth from the expansion slot cover are four mini-DisplayPort outputs and a single DL-DVI connector, ready to drive five displays simultaneously, if you so desire.
So the 295 X2 is roughly the equivalent of two Radeon R9 290X cards crammed into one dual-slot card (plus an external radiator). That makes it the most capable single-card graphics solution that's ever come through Damage Labs, as indicated by the bigness of the numbers attached to it in the table below.
Card               | Peak pixel fill rate (Gpixels/s) | Peak bilinear filtering int8/fp16 (Gtexels/s) | Peak shader arithmetic (tflops) | Peak rasterization (Gtris/s) | Memory bandwidth (GB/s)
Radeon HD 7970     | 30  | 118/59  | 3.8  | 1.9        | 264
Radeon HD 7990     | 64  | 256/128 | 8.2  | 4.0        | 576
Radeon R9 280X     | 32  | 128/64  | 4.1  | 2.0        | 288
Radeon R9 290      | 61  | 152/76  | 4.8  | 3.8        | 320
Radeon R9 290X     | 64  | 176/88  | 5.6  | 4.0        | 320
Radeon R9 295 X2   | 130 | 352/176 | 11.3 | 8.1        | 640
GeForce GTX 690    | 65  | 261/261 | 6.5  | 8.2        | 385
GeForce GTX 770    | 35  | 139/139 | 3.3  | 4.3        | 224
GeForce GTX 780    | 43  | 173/173 | 4.2  | 3.6 or 4.5 | 288
GeForce GTX Titan  | 42  | 196/196 | 4.7  | 4.4        | 288
GeForce GTX 780 Ti | 45  | 223/223 | 5.3  | 4.6        | 336
Those are some large values. In fact, the only way you could match the bigness of those numbers would be to pair up a couple of Nvidia's fastest cards, like the GeForce GTX 780 Ti. No current single GPU comes close.
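If you're wondering where the 295 X2's row comes from, it can be derived from Hawaii's per-GPU unit counts (which the table doesn't list) and the 1018MHz clock quoted earlier. Here's the arithmetic; a couple of results land slightly off the table's entries, presumably because of rounding or because different peaks assume slightly different clocks.

```python
# Deriving the R9 295 X2's peak rates from per-GPU resources.
# Unit counts are Hawaii's, as shipped fully enabled on the R9 290X.
clock_ghz = 1.018   # quoted earlier in the article
gpus      = 2
rops      = 64      # pixels/clock per GPU
tmus      = 176     # int8 texels/clock per GPU (fp16 runs at half rate)
shaders   = 2816    # stream processors per GPU
tris_clk  = 4       # rasterized triangles/clock per GPU
bus_bits  = 512     # memory interface width per GPU
mem_gtps  = 5.0     # GDDR5 transfer rate, GT/s

print(f"Pixel fill:  {gpus * rops * clock_ghz:.0f} Gpixels/s")
print(f"Texel rate:  {gpus * tmus * clock_ghz:.0f} Gtexels/s (int8)")
print(f"Shader rate: {gpus * shaders * 2 * clock_ghz / 1000:.1f} tflops")
print(f"Triangles:   {gpus * tris_clk * clock_ghz:.1f} Gtris/s")
print(f"Bandwidth:   {gpus * bus_bits / 8 * mem_gtps:.0f} GB/s")
```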
There is a cost for achieving those large numbers, though. The 295 X2's peak power rating is a jaw-dropping 500W. That's quite a bit higher than some of our previous champs, such as the GeForce GTX 690 at 300W and the Radeon HD 7990 at 375W. Making this thing work without a new approach to cooling wasn't gonna be practical.
[Continue Reading]

Overclocking the Core i7-4790K

LIKE LOTS OF things in personal computing, overclocking has progressed mightily since its early days. Back when we first started experimenting on Celerons, CPU performance was a scarce and precious resource, doled out in small increments for hundreds of dollars each. Those of us who dared to violate the specs on our processors were viewed with suspicion by our peers and those in the PC industry alike. Sure, what we were doing wasn't technically illegal, but you'd think it might have been, given how some folks reacted. CPU makers talked about the voiding of warranties and, worse, warned ominously of the dangers of electromigration ending your chip's life early.
None of it slowed us down, of course, because PC enthusiasts saw a chance to grab more of that sweet, sweet computing power essentially for free. Raising the clock speed from 300 to 450 MHz meant 50% more oomph for, you know, decompressing those JPEGs that really fly down the pipe over a V.90 modem. For decoding those beefy 192Kbps MP3s. For pushing higher frame rates in QuakeWorld. For, uh, making Outlook Express feel extra snappy.
Yes, you could feel the speed difference in a mail client. Those were dark days.
Back then, we truly needed more speed in the worst way, and overclocking was a means of obtaining what you couldn't buy—either because it was too expensive or because it simply wasn't for sale. As a result, a great many PC DIYers overclocked their systems. The "free" extra speed was one advantage of having built your own box.
Somewhere between then and now, overclocking sold out. I know how strange it sounds to hear that a quirky practice, something people do, could succumb to the allure of fame and fortune, but somehow, that's what happened.
Specific products became tailored for overclocking, especially motherboards. Companies introduced "overclocked in the box" video cards, which weren't overclocked at all but borrowed the word shamelessly. Meanwhile, overclocking became a competitive endeavor, complete with world records, celebrity practitioners, and corporate sponsors. Liquid nitrogen got involved. Over time, even Intel and AMD got into the act, creating "overclockable" versions of their chips with unlocked multipliers, available for a slight price premium.
The real kiss of death had to be when "overclockers" cooled and hardened into one of the handful of terms used by product marketing people to describe the PC market. You've got your "mainstream" buyers, your "enthusiasts," "gamers," and "overclockers." Individual products are built to appeal specifically to each of these segments. I've seen the PowerPoint slides. I've gotta admit, I've been doing this job for a long time, but I don't know what those terms actually mean. I'm pretty sure that means they're perfectly integrated into the corporate lexicon, which is about talking without saying things.
All of which leads me, implausibly, to Devil's Canyon.
You see, Intel says the new CPUs under this code-name are intended for "overclockers." Does that mean me? Or does it mean some guy with a LN2 pot, a modified motherboard, and a stack of six chips to try while pursuing the SuperPi world record? Honestly, I'm confused on that point. I dunno whether I qualify for this product's target demo.
Then again, as a PC enthusiast and tinkerer, I don't much care either way. I just want to know if there's more free speed to be squeezed out of these things. So let's have a look.
The Devil's Canyon chips
Under the metal cap of a Devil's Canyon processor is the same 22-nm Haswell silicon that drives any other recent Intel Core i5 or i7 CPU. The differences are at the package level, and the biggest one is literally right under that cap: a new thermal interface material, or TIM, between the cap and the chip. Intel switched to a different thermal interface with its first 22-nm chips, and some folks blamed the new TIM for the Ivy Bridge chips' unwillingness to overclock as well as the 32-nm Sandy Bridge processors before them. They claimed the prior TIM arrangement, known as fluxless solder, transferred heat more efficiently. Devil's Canyon has switched to a third option, a "next-generation" polymer TIM known affectionately as NGPTIM. Its goal is to transfer heat more efficiently between the CPU and the cap above it—and thus to the cooling solution sandwiched on top of it all.
Is the stock TIM for Ivy Bridge and Haswell really a problem? I dunno. Many substances (even toothpaste) can serve competently as the thin layer ensuring solid contact between two surfaces. This TIM issue is a matter of debate, but Intel does seem to have validated its critics by adopting another TIM in these new products.
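For a sense of scale, here's a back-of-the-envelope application of Fourier's law to the interface layer. Every number below is an assumption chosen for illustration; Intel publishes neither bond-line thicknesses nor conductivities, and real values vary widely. Still, the contrast shows why enthusiasts mourned the fluxless solder.

```python
# Rough temperature drop across the TIM layer: dT = P * t / (k * A).
# All values are illustrative assumptions, not Intel specifications.
power_w = 88          # Devil's Canyon rated TDP (overclocked draw is higher)
thickness_m = 100e-6  # assumed bond-line thickness: 100 microns
area_m2 = 1.77e-4     # assumed die contact area: ~177 mm^2

for name, k_w_mk in [("polymer TIM, assumed k = 3 W/mK", 3.0),
                     ("fluxless solder, assumed k = 50 W/mK", 50.0)]:
    delta_t = power_w * thickness_m / (k_w_mk * area_m2)
    print(f"{name}: ~{delta_t:.1f} degrees C across the interface")
```

Under these assumptions, the polymer layer costs well over a dozen degrees that the solder wouldn't, and every degree shaved off is overclocking headroom.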
The Core i7-4770K (left) and the "Devil's Canyon" Core i7-4790K (right)
The other change to the Devil's Canyon parts is visible in the picture above. The package has a modified power delivery arrangement, with more capacitors than in the regular Haswell substrate. Intel says the added caps will "smooth power delivery to the die," which in turn should increase stability and thus frequency headroom. That's the theory, at least.
To its credit, Intel went off of its established roadmap and made these tweaks to Devil's Canyon pretty quickly in direct response to the perceived desires of PC enthusiasts. These products are an olive branch, the first step in a renewed commitment to desktop CPUs.
Model           | Base clock | Max Turbo clock | Cores/threads | L3 cache | Intel HD Graphics | Max graphics clock | TDP | Price
Core i7-4790K * | 4.0GHz     | 4.4GHz          | 4/8           | 8MB      | 4600              | 1250MHz            | 88W | $339
Core i7-4770K   | 3.5GHz     | 3.9GHz          | 4/8           | 8MB      | 4600              | 1250MHz            | 84W | $339
Core i5-4690K * | 3.5GHz     | 3.9GHz          | 4/4           | 6MB      | 4600              | 1200MHz            | 88W | $242
Core i5-4670K   | 3.4GHz     | 3.8GHz          | 4/4           | 6MB      | 4600              | 1200MHz            | 84W | $242
The two models marked with an asterisk in the table above are the Devil's Canyon parts. Only these two products will get the special treatment, and both of them belong in the unlocked, overclocking-friendly K-series lineup.
The Core i7-4790K essentially replaces the 4770K at the same price, and if you have zero plans for overclocking your CPU, the 4790K is still worthy of your attention. Intel has raised the base and peak Turbo clock speeds by 500MHz, so the 4790K's baseline operating frequency is an even 4GHz. This is Intel's first 4GHz desktop processor, and more importantly, this clock speed bump delivers the largest desktop CPU performance increase we've seen in several generations (at stock speeds, at least).
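For the record, that 500MHz bump works out like so:

```python
# The 4790K's clock increases over the 4770K, in percentage terms.
print(f"Base:  4.0GHz vs. 3.5GHz -> +{4.0 / 3.5 - 1:.1%}")
print(f"Turbo: 4.4GHz vs. 3.9GHz -> +{4.4 / 3.9 - 1:.1%}")
```

That's a 13-14% frequency gain at the same price.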
The 4690K is less exciting, since it's just 100MHz faster than the 4670K before it.
Both of these chips are rated for 88W of peak power draw, up 4W from the prior models. Intel says any motherboard based on the new Z97 chipset ought to support them. Happily, the firm has allowed older Z87 boards to host Devil's Canyon processors, as well, provided they can deliver the additional power needed. We expect most mobo makers to provide firmware updates to enable support.
Oh, one more thing. Intel has evidently been listening to our complaints on another front. The ARK listings for the 4690K and 4790K say these CPUs support Haswell's new TSX instructions for transactional memory and VT-d for virtualized I/O. In a baffling move, the older K-series parts didn't support these advanced features, apparently because "enthusiasts" and "overclockers" shouldn't care about... performance? I dunno. Like I said, baffling, but happily, Intel made things right in the new models.
[Continue Reading]

Zotac's Z77-ITX WiFi Mini-ITX motherboard reviewed

CAN YOU BELIEVE more than a decade has passed since Via introduced the Mini-ITX motherboard form factor? Man, I'm getting old. Contemporary Mini-ITX boards bear little resemblance to the first examples, though. They may share the same 6.7" x 6.7" footprint, but the similarities end there.
Via created the Mini-ITX form factor to show off its own low-power processors, which were soldered on and really too slow to appeal to enthusiasts. The chipset-based integrated graphics weren't very good, either, and there was nowhere to put a proper graphics card.
Then along came Zotac, which started building Mini-ITX boards that more closely resembled full-sized ATX models. Gone were Via's weak-sauce processors, replaced by standard sockets capable of accepting the fastest desktop CPUs. Zotac also swapped out the PCI slots common on early Mini-ITX designs in favor of PCI Express equivalents ripe for discrete graphics cards. With the addition of a couple of DIMM slots for dual-channel memory configurations, the basic template for the modern Mini-ITX board was born.
For a while, Zotac had the market for high-performance mini mobos mostly to itself. But the niche grew, fueled by shrinking processor platforms with ever-expanding peripheral payloads and new cases built to accommodate potent graphics cards and aftermarket CPU coolers. Slowly but surely, the big-name motherboard makers took notice and threw their own hats into the ring. Zotac now faces a much deeper field of competitors than ever before. But the PC Partner subsidiary also has experience on its side, which is why we were eager to check out its flagship Z77 board.
The Z77-ITX WiFi fits the formula to a tee. Its LGA1155 socket is compatible with Intel's latest Ivy Bridge processors, including the top-of-the-line Core i7-3770K. That processor can be overclocked to your heart's content thanks to the lack of multiplier restrictions in the Z77 Express platform. The Z77 also supports Intel's SSD caching solution and Lucid's Virtu GPU virtualization software. You could build yourself one muscular machine with this mobo.
Speaking of muscle, the Z77-ITX has the requisite PCI Express x16 expansion slot. Gamers shouldn't have to go without a discrete graphics card, especially when the Mini-ITX form factor is so ideal for a LAN party box. Dual-channel DIMM slots run up one edge of the board, nicely completing the textbook template.
The Mini-ITX form factor's limited real estate keeps components close to the socket, creating the potential for clearance issues with larger heatsinks that extend beyond the restricted zone, a 3.7" x 3.7" box surrounding the socket. Intel's specifications forbid taller components from infiltrating this region, but motherboard makers have free rein outside the protected area.
Since we can't test every combination of heatsink, memory, graphics card, and enclosure, we've taken some measurements to illustrate the distances between the edges of the CPU socket and notable landmarks, including the boundaries of the board.
The socket is much closer to the PCI Express slot than on the Asus P8Z77-I Deluxe. This placement can be problematic with coolers whose heatpipes snake out and up into wide radiators. The DIMM slots are even closer to the socket, although the distance there is about the same as on the Asus board.
Note the location of the VRM heatsinks, which both stand 28 mm tall. You'll also want to make sure you have clearance for the wireless card sticking out of the vertical Mini PCIe slot. The card rises 32.5 mm off the board and is about the same height as a standard memory module.
Vertical clearance definitely won't be a problem for the second Mini PCIe slot, which orients cards parallel to the circuit board. This mSATA-compatible slot can accept mini SSDs for storage or caching, a nice perk for cases with limited storage bays. The slot sits just to the left of the SATA ports in the picture above.
Like the mSATA slot, the Serial ATA ports are fed by the Z77 Express platform hub. The same chip also provides four USB 3.0 ports: two accessible via an onboard header and two more in the rear cluster.
The port cluster contains a few surprises, including dual HDMI ports and a Mini DisplayPort out. While the HDMI outputs peak at 1920x1200, the DisplayPort connection can push resolutions up to the 2560x1600 supported by typical 30" monitors. If you don't have Mini DisplayPort hardware, worry not; the box contains a full-sized DisplayPort adapter cable for the miniature port. There's no provision for straight DVI output, though.
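That 1920x1200 ceiling on the HDMI outputs looks a lot like a classic single-link 165MHz TMDS limit, though that's an educated guess on my part, since Zotac doesn't say. Some rough pixel-clock math (assuming 60Hz refresh and about 12% blanking overhead) shows why DisplayPort has to carry the load at 2560x1600:

```python
# Rough pixel-clock requirements -- the 165 MHz single-link limit and
# the ~12% blanking overhead are assumptions for illustration.
TMDS_LIMIT_MHZ = 165
BLANKING_OVERHEAD = 1.12

for width, height in [(1920, 1200), (2560, 1600)]:
    mhz = width * height * 60 * BLANKING_OVERHEAD / 1e6
    verdict = "fits" if mhz <= TMDS_LIMIT_MHZ else "exceeds"
    print(f"{width}x{height}@60Hz: ~{mhz:.0f} MHz ({verdict} a 165 MHz link)")
```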
Zotac doubles down on Gigabit Ethernet in addition to endowing the board with 802.11n Wi-Fi and Bluetooth support. The networking may be robust, but the integrated audio is a little weak. Sure, there are five analog jacks plus a digital S/PDIF output. But the drivers don't support real-time multichannel encoding, restricting digital surround sound to content with pre-encoded tracks. That's fine for movies but no help at all in games. There's no provision to virtualize surround sound for playback on stereo devices, either.
Zotac scores points for putting a clear CMOS button in the rear cluster, making it much easier to reset the firmware after an overclocking misadventure. Buckling-spring aficionados rocking classic IBM Model M keyboards should appreciate the PS/2 port, as well.
For testing motherboards on an open rack, as reviewers tend to do, the integrated power and reset buttons are quite handy. These little extras impart admittedly little value to the average user, but the two-digit POST code display can be very helpful for troubleshooting issues with the boot process.
A few other items add to the overall package. The first is an extension for the auxiliary 12V power connector, which could come in handy if your PSU's cables don't reach. Zotac also throws in an expansion bracket for the internal USB 3.0 ports, including a half-height back plate for low-profile cases. Mini-ITX enclosures don't always have SuperSpeed USB ports up front, but at least there's a way to tap the board's internal headers. That's the Mini DisplayPort adapter at the bottom, by the way.
[Continue Reading]

Logitech's K400 wireless keyboard and touchpad reviewed

WE'VE REVIEWED QUITE A FEW KEYBOARDS here at TR, but nothing quite like this one. The Logitech K400 doesn't have mechanical key switches, glow-in-the-dark backlighting, wicked-fast USB 3.0 ports, or powerful macro functionality. It's a basic wireless keyboard with an integrated touchpad. But it's also only $40, and as far as I can tell, it's an excellent fit for home-theater PCs.
Well, the black one is, anyway. Logitech sent us the white version of the K400, whose overwhelming whiteness is a little much for the living room. The pristine aesthetic is especially prone to being stained by Cheeto dust and other snack residue.
I've seen the black version of the K400 in person, and it's a lot more understated despite having a graphic on the touchpad. That variant is well worth the extra $2 over the cheaper white model. Don't just take my word for it, either; Newegg has 327 user reviews of the black version but only four of the white model.
Coloring aside, one of the most striking things about the K400 is how light it is. The keyboard weighs only 0.89 lbs (405 g) according to my kitchen scale. It's easy to pick up with one hand and comfortable to keep propped on one's lap.
The K400's plastic body keeps the weight low, but it's also a little flimsy. The keyboard bends visibly when held from one side, and the entire body can be twisted with only moderate effort. Our sample is curved slightly, as well. The middle of the keyboard bows up, causing visible flex under heavy-handed typing. My last HTPC keyboard was an Enermax Aurora Micro Wireless, whose aluminum body is much stiffer. But I also paid $80 for the thing, and it's about twice the weight of the K400. On the couch, at least, the difference in weight is more noticeable than the difference in rigidity.
At 13.9" (354 mm) wide, the K400 is a few inches narrower than a full-sized desktop keyboard. The integrated touchpad takes up a fair amount of room, resulting in some shrinkage for the key area. Even with a pared-down, laptop-style layout, the alpha key area is 6% narrower and 7% shorter than our full-sized reference. My XL-sized mitts don't feel overly cramped when typing, but my fingers do feel a bit squished together when resting on the WASD triangle.
For brief bouts of typing, the smaller footprint isn't a problem. Neither is the mediocre key feel. However, I couldn't bring myself to write this review on the K400. The key action is too mushy, and the tactile feedback is too vague. I'm not just spoiled by desktop keyboards with mechanical key switches, either. Even Asus' budget-priced Transformer Book T100 convertible tablet has a better key feel.
To Logitech's credit, typing on the K400 generates very little noise. The media keys work as expected, and there's an extra left-click button in the upper left corner. The keyboard is also loaded with function keys tied to Win8 features like search, settings, and application switching. I found the app switching button especially useful, mostly because the associated gesture is unreliable. Which brings us to the touchpad...
The touchpad's surface is recessed about one millimeter into the keyboard. That might not sound like a lot, but it feels like a big drop when executing Win8 gestures that require dragging one's finger onto the touchpad from an outside edge. My finger doesn't always hit the very edge of the tracking surface as it drops down, which seems to impair the recognition of those gestures. App switching is affected, as is access to the Charms bar and application menu.
Otherwise, the keyboard's gesture support is good. The usual two-finger gestures work right out of the box, with no need to install drivers. The 3.5" tracking area doesn't feel too constrained, but I wish there were a coasting option to extend two-finger scrolling.
I also wish the cursor tracking felt tighter. My fingertip glides across the smooth touchpad surface with ease, but the on-screen cursor lags noticeably. It's almost as if the cursor is sliding on ice. Fortunately, there doesn't seem to be any latency associated with the keyboard response. I didn't notice any obvious signs of input lag while playing Battlefield 4 with the K400 (and a separate mouse).
The K400 interfaces with the host PC via Logitech's Unifying receiver. This dongle plugs into a USB port and is capable of communicating with multiple devices over a 2.4GHz wireless connection. The dongle itself is tiny; in the picture above, it's plugged into the USB port extender that also comes in the box. Below, you can see the dongle tucked into the battery compartment door.
Logitech says the K400 works up to 33' (10 m) away from the receiver, which matches my real-world impressions. The wireless connection even works without line of sight to the receiver.
I haven't spent enough time with the K400 to confirm Logitech's claim that the keyboard's AA batteries are good for two million keystrokes, or about one year of use. However, I can verify that the keyboard's power-saving measures are unobtrusive. Even after being left idle for days, the K400 still responds quickly to both keyboard and touchpad input. Folks who want to conserve power further can turn the keyboard off completely using a switch located on the front edge.
The K400 isn't perfect. Compromises are required to squeeze a keyboard and touchpad into a lightweight, wireless package that retails for $40 or less. But those tradeoffs are reasonable for home-theater PC duty, especially considering the bargain price. Logitech's other wireless keyboard and touchpad combo, the TK820, sells for $100—more than twice the price of the K400.
Anyone shopping for an affordable wireless keyboard and touchpad combo should have the K400 on their short list. This certainly isn't the nicest keyboard around, but the K400 is good enough for occasional use, and it's a solid value overall.
[Continue Reading]