Saturday, 7 January 2012

Intel Ivy Bridge CPUs arriving April 2012



Ivy Bridge die shot


According to Taiwanese OEMs, April 8 will be the day that you can get your hands on desktop and mobile Ivy Bridge CPUs. These will be the first commercial chips built on a 22nm process, and — perhaps more importantly — the first silicon chips to use 3D tri-gate transistors (FinFETs), instead of ye olde planar MOSFET that every other manufacturer and foundry is still using.
A total of 13 CPUs will be released on or around April 8: Seven desktop chips will be immediately available, all priced between $184 and $332 and targeted at the low- and mid-range market, the fastest being the Core i7-3770K. Six mobile chips spanning the entire price gamut will also be available, including a high-end $1,100 Core i7-3920XM. Chipsets for both desktop and mobile will be released as well, including the top-end Z77, the H77, Z75, and B75, and their mobile equivalents.
Before you get too excited, though, bear in mind that Ivy Bridge is not a performance update to Sandy Bridge. Where Sandy Bridge was the tock — new architecture — following Westmere, Ivy Bridge is the tick (die shrink) of Intel's tick-tock release strategy. That doesn't mean that IB isn't faster than SB — some leaked benchmarks show a 2-8% gain — but primarily, Ivy Bridge will consume less power. According to Intel, the Core i7-3770K will have a TDP of just 77 watts, down from 95W on the current top-end i7-2700K.
Ivy Bridge
This is obviously big news for the mobile sector, where the CPU, along with the display and backlight, makes up the bulk of a device's power consumption. With Intel's 2012 priorities being smartphones, ultrabooks, and the success of Medfield, presumably much of the company is working on reducing power footprints. Laptops might be by far the most dominant PC form factor, but if I can build a desktop PC that's fast, saves power, and cuts down on CPU core temperature, I'm not going to complain. The other big change, though it probably won't affect many ExtremeTech readers, is that Ivy Bridge chips will feature a new, slightly-less-awful integrated GPU.
The power savings, incidentally, most likely stem from the use of 3D FinFETs in Ivy Bridge, along with other advances in silicon chip fabrication technology. Medfield will have to wait until 2013 or 2014 for its 3D FinFET re-work, but when that eventually happens Intel might even move ahead of ARM-based designs in terms of power consumption.

My New Year’s resolution: 5760 x 1080


Flight simulator on multiple CRT monitors


It’s that time again, isn’t it? When we’re all supposed to take stock of the year that’s just wound down, and figure out how we’re going to make the upcoming year even better? Three hundred and sixty-five days ago, I wouldn’t have even considered it — 2010 is a top contender for Worst Year of My Life — but 2011 has been considerably better in a lot of ways. So, even though I’m not normally one for publicly doing this sort of thing, just this once I’ll bite.
My New Year’s resolution for 2012 is: 5,760 by 1,080.
No, I’m quite serious. I’ve had it with pitiful, small, and (heaven forbid) one-monitor display setups. In this day and age, they’re as antiquated as the Commodore 128, and I plan on making my computing life as full of spacious triple-monitor arrays as I can. That means dumping my lone home monitor (which has been acting up the last couple of months anyway) in favor of three that will expand my desktop horizons in more ways than one.
Old beige LCD monitor
I haven't come to this decision lightly, either. That monitor I have at home has served me well for more than four years, and as with clothes I'm loath to discard anything that still fulfills its basic function. Besides, that 1920×1200 rotatable display indoctrinated me into the shimmering pleasures and bracing usefulness of widescreen, a then-innovation I cannot imagine living without whether playing games or (sigh) doing work. Ever since I lost myself in the monitor's endless side-to-side expanse of pixels, I've been unable to look at (or cope with) traditional 4:3 monitors with much but disdain.
Alas, first loves don’t always endure. I’ve grown during the last four years, and so have my computing needs. I do a lot more with spreadsheets now than I used to, I edit a lot more images, I watch a lot more online videos while I’m doing something else. A single widescreen panel helps with these things, but I still have to make a lot of compromises in terms of window size and arrangement. And if I want to play a game in a window rather than on the full screen (which I also do now more than I once did), I still have to cover up a lot of other stuff or play at an eye-rollingly small resolution — neither of which I love.
It was only around the middle of 2011 that I realized what I'd been missing. I borrowed some hardware we had sitting around, unused and unloved, to set up a three-monitor system at work. Almost instantly it proved so ideal for my business environment that I wondered how I'd ever lived without it. Outlook I relegated to the right monitor and Photoshop I placed on the left one, leaving the center screen free for whatever the program of the moment was (usually a web browser, Word, or Excel). But I could also have my Twitter client, our intra-office messaging program, and our content management system all open at the same time and never feel claustrophobic. It is, for me, the ultimate in convenience, and something that I just can't do at home, which is a shame given how much work I do from there these days.
A three-display 5760x1080 setup
Now my path is clear. True, achieving my 2012 resolution means getting three monitors of the 1920×1080 variety, even though I've never cared for that style as much as 1920×1200 (what can I say, I love vertical space), but that's a small price to pay for all the other benefits I get. Plus, there's also a lot more selection: As of this writing, there are ten times more 1920×1080 monitors available on Newegg than there are 1920×1200 monitors (183 versus 18), which drastically improves my odds of finding one with the style and features I want. The shorter monitors are even disproportionately less expensive — the lowest-priced one (from Acer) costs $109.99, whereas the lowest-priced 1920×1200 monitor (from Hanns-G) runs $279.99. Yes, I could theoretically outfit my computer with three of the shorter monitors for only slightly more than it would cost me to buy a single taller one. Somehow, I'm okay with that.
Something else that's changed for the better in the last four years is the video card. It's a lot easier now to find one, even a budget model, that supports multiple display outputs. Not every card is going to enable five or six (though some, including AMD's recently released Radeon HD 7970, certainly do), but the proliferation of HDMI and Mini DisplayPort connectors, in addition to (and in some cases supplanting) DVI, has greatly simplified the process of driving multiple displays. And because both AMD and Nvidia have developed special technologies for use with three or more monitors (Eyefinity for AMD, Surround for Nvidia), getting everything working is no longer a throbbing pain in the rear panel.


Only one area still troubles me, and that's gaming. Whether I'm playing something like The Elder Scrolls V: Skyrim or Batman: Arkham City for work or merely for pleasure, I want my computer to show them off to their utmost. And so far I've struggled to have a good FPS experience across three monitors. When the image arcs around my head for almost seven feet, I don't always see the value in it. I'm always focused on what's directly in front of me, and very few games are designed such that I miss much of import from the ultra-extreme sides. Usually the image just seems to get off-kilter that far out (pictured below), when I can even convince myself to look all the way over there at it. Strategy games like Civilization V are better suited to the format, because you see more of the "board" all on a single plane, but there's a prevailing sense of overkill even with those. I prefer playing at 2560×1600, frankly. (Oops, the cheapest of those monitors costs $1,199.99 on Newegg. Never mind.)
Some statistics show I’m not alone in my apprehension here. In the most recent (November) edition of the Steam Hardware & Software Survey, the most popular overall display resolution is 1920×1080 (used by 24.12 percent of the survey sample). The most popular multi-monitor resolution, for about 10 percent of reporting users, is 3840×1080 — two monitors. Most of the larger options have barely any support at all; for example, the intriguing 5120×2880 is used by just 0.71% — and my longed-for 5760×1080 isn’t even on the list.
Battlefield Bad Company 2, at 5760x1080
But if Nvidia and AMD are still having trouble moving multi-monitor into the mainstream, some enterprising people are finding exciting things to do with it. One blogger looked at his displays as an important part of a larger artistic and aesthetic statement. In a home office, a five-display configuration brings much-needed order to an otherwise chaotic workspace. One guy even went a lot further than I've ever dreamed of, optimizing his house with fold-out stations for LAN parties. Even just a few years ago, these projects wouldn't have been practical, and they would have been only barely possible.
World of Warcraft at 5760 by 1080
I don’t have the desire to be that kind of trailblazer; I just want to do everything on my computer better than I do it today. For now, three monitors are the next logical step. I can’t imagine I’ll ever need more than that. (Though, come to think of it, I remember thinking something similar when I unpacked my 1920×1200 display in 2007…) Do you use multiple monitors? If so, how many and in what configuration? Or have you done anything really interesting with them, as in the examples above? I’d really like to know what your resolution is, and whether you plan on going bigger sometime soon.
As for me, I need to get shopping. Then I’ll face what will undoubtedly be the biggest challenge of 2012 for me: clearing off my desk to make room for all three displays. But somehow I have a feeling all those extra pixels will be more than worth the trouble.

The rise and fall of the Sony empire



Sony logo, with Walkman watermark


With the recent report that Sony has sold all of its S-LCD interests to Samsung for $939 million, another flare has been shot warning that the former king of electronics remains on a downward spiral with no end in sight. A lack of innovation and misguided decisions (not to mention a few natural disasters) have eroded the foundation of the company while competitors like Samsung and LG have overtaken the electronics giant in markets such as televisions and mobile phones. It wasn't that long ago that Sony products were considered the crème de la crème of consumer electronics, the pinnacle for technophiles the world over. What factors caused the company's slide into mediocrity? To answer that question, we need to look at Sony during its most successful period, the explosion of the '80s and early '90s.
It's difficult to explain to people born after 1990 what kind of cultural impact Sony had during this time. Simply put, Sony was the Apple of its day, a company whose products merged innovation with everyday media consumption habits. When Sony unveiled the now-legendary Walkman in 1979, it fomented a revolution in the way people interacted with music. Buyers flocked to retailers to get their hands on the device that would let them bring their music, in the form of analog cassette tapes, wherever they wanted. It was the must-have device of the decade, cementing the Sony brand in the minds of consumers as the name in electronics. Even when rival companies began churning out lower-cost knockoffs, demand for the Walkman remained high because consumers trusted the name. No matter the price, people would buy a device if it had the Sony name printed on it. Sony, not Apple, invented the extreme consumer dedication Cupertino now enjoys.
Following up on the Walkman craze of the '80s, Sony once again changed the face of audio recording when it teamed up with Philips to perfect the compact disc format. CDs opened up a vast array of possibilities, giving users a "master" copy of their audio as well as the convenience of quickly jumping between tracks. The quality and amount of music that could be stored on a compact disc vastly outstripped the cassette tape, and the format once again thrust Sony to the forefront of media consumption innovation.
Sony Walkman vs. Apple iPod (first generation)
Unfortunately, the CD can be seen as the peak of Sony's influence on the market. While it continued to develop new formats, such as the MiniDisc, none enjoyed the mass adoption of its previous efforts. (Sure, MiniDV and Blu-ray have done well, but only because of a lack of affordable alternative storage media. Flash media and streaming online content delivery are making these obsolete.) Sony became a nebulous company that fell prey both to its own avarice and to competitors who correctly gauged where consumers would look for the next generation of media technology. Namely, the MP3.
If I had to point to a specific day in history that marked the decline of Sony as the worldwide leader in technology, it would be October 23, 2001. This was the date that Steve Jobs took the stage in his mock-turtlenecked glory and announced the next revolution in music, the iPod. In one fell swoop, Apple beat Sony to one of the most important technological advances of this century. By giving consumers a device with instant purchasing power and the ability to listen to high-quality audio files on the go, Cupertino completely changed the playing field for music consumption, a feat that Sony was no longer capable of.

The next internet: A quest for speed



Internet 2 Logo


What would you do differently with 10 gigabits per second (10Gbps) of data bandwidth to your home and office? If the innovators working on designs for the next-generation internet are successful, your dreams can come true. It is easy to take for granted the increasingly massive amount of data we send back and forth every day over the internet and the extraordinary growth in the number of people using it, but there is a limit to how much our existing network infrastructure can take — and we're quickly reaching it. Fortunately, a group of university and corporate researchers, backed by US government grants and industry funding, is hard at work designing what could be called Internet 2.0 to address the growing demands for speed, scale, and security that the current internet's design struggles to meet.

Making the internet smarter

In the same way that we talk about a "smart grid" for electricity, one avenue of internet research involves making our current network infrastructure smarter about the data it carries. The decades-old layered architecture that underlies almost all networks — including the internet — deliberately restricts interactions between those layers. This isolates applications from the network they use, making it possible to invent and deploy new transports and even physical networks without rewriting applications. This separation of layers has helped the internet family of protocols spread like wildfire to new types of physical devices like satellites and cell phones, but it limits how smart the network can be about optimizing the data it carries. Without expensive, dedicated caching solutions, for example, the current internet doesn't know that a million identical copies of a new hit single are being downloaded and that it could just send along one copy to all million users.
Keren Bergman at Columbia has begun to tackle exactly these issues by creating cross-layer protocols, so that the physical layers of the network can provide feedback to applications — allowing them to optimize their use of the network based on actual conditions, much as real-time traffic monitoring has become an essential element of GPS software. Bergman's vision of a smarter internet, like a smarter power grid, can optimize the bandwidth already in place, but it is still limited by the overall capacity of the network — making it an important, but not sufficient, solution to our growing need for bandwidth.

Making the internet faster

Map of the internet
Going in a different direction, working to dramatically increase the bandwidth of the internet, researchers at the University of Arizona and eight other universities — all part of the same NSF-funded Center for Integrated Access Networks (CIAN) as Bergman — are attempting to create an all-optical version of the internet that can provide up to 10Gbps to individual users. By using optical chips to replace the electrical circuits that connect the optical fibers making up the backbone of today's internet, they hope to eliminate enough bottlenecks to increase end-user throughput almost a thousandfold.
To recreate electrical circuits with optical components, many of the basic building blocks of electronics have to be reinvented for optics. For example, within the past year researchers at MIT developed a way to create the optical equivalent of an electrical diode (a device that allows signals to flow in only one direction), and the team at the University of Arizona came up with a method for restoring degraded optical signals. Meanwhile, teams from Caltech and Canada managed to transmit 186Gbps over a 134-mile-long optical network.


Fixing the protocols: FAST TCP & OpenFlow

Faster pipes are still only part of the solution. The internet's protocols were developed decades ago, for much slower speeds than are needed today, so redesigning them to cope with the planned increase in bandwidth is also a major research area. Even the venerable TCP protocol is coming under fire. Caltech professor Steven Low explains that TCP's simplistic assumption that failed packets are a result of congestion — and its response of slowing down the sending device — doesn't fit well with today's multi-modal network, where the failure could be due to momentary interference with a mobile phone or other wireless signal. He and his colleagues developed FAST TCP, which monitors and reacts to the average delay of packets instead of individual packet failures — in the case where some packets are being lost but the average delay is small, FAST TCP will actually speed up the sender to increase throughput, instead of slowing it down.
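To make that concrete, below is a toy sketch of a delay-based window update in the spirit of the published FAST TCP rule. The parameter values and the simplified update loop are illustrative assumptions on my part; real implementations live in the operating system kernel and add pacing, loss handling, and careful tuning.

```python
# A toy, delay-based congestion window update in the spirit of FAST TCP.
# Parameter values (alpha, gamma) are illustrative, not tuned defaults.

def fast_window_update(window, base_rtt, current_rtt, alpha=200, gamma=0.5):
    """Return a new congestion window size (in packets).

    base_rtt    -- smallest round-trip time seen (propagation delay estimate)
    current_rtt -- recent average round-trip time (propagation + queueing)
    alpha       -- target number of packets kept queued in the network
    gamma       -- smoothing factor between the old window and the target
    """
    # Scale the window by the fraction of the RTT that is pure propagation,
    # then add alpha packets' worth of intentional queueing.
    target = (base_rtt / current_rtt) * window + alpha
    # Move partway toward the target, never more than doubling the window.
    return min(2 * window, (1 - gamma) * window + gamma * target)

# Low queueing delay: the window keeps growing even if a few packets were lost.
print(fast_window_update(window=1000, base_rtt=0.050, current_rtt=0.052))  # ~1081
# High queueing delay: the window backs off before any loss event occurs.
print(fast_window_update(window=1000, base_rtt=0.050, current_rtt=0.100))  # 850.0
```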
FAST TCP helped Low and his colleagues set the internet speed record of over 100Gbps in a series of tests in 2003-2006, a record which has only been slightly bettered since. Startup FASTSOFT is working to capitalize on the commercial implications of the speedups possible with FAST TCP.

Internet on steroids: Internet2

Internet 2 USA 100Gbps Planned Backbone
Large Hadron Collider
Internet2 has created a 400Gbps version of the internet that now connects hundreds of groups — and has funding to increase its backbone to over 8Tbps (that's terabits per second), running over 100Gbps fiber. In addition to using optical technology wherever possible, Internet2 is an important tool for developers of the next internet, serving as an ideal testbed for new protocols and routing strategies.
Internet2's OS3E project uses a unique underlying network architecture called OpenFlow — code developed by researchers at Stanford and other universities that allows routers to be flexibly reprogrammed in software — to let researchers around the globe prototype and test new protocols directly on top of existing networks.
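As a rough illustration of what "reprogramming a router in software" means, here is a minimal sketch of the match-action model OpenFlow is built around. This is not the OpenFlow wire protocol or any real controller API; the rule format, function names, and port values are my own simplification.

```python
# A conceptual sketch of OpenFlow's match-action idea: a switch keeps a flow
# table of rules, and a controller installs those rules in software instead
# of the forwarding logic being baked into hardware. Not a real OpenFlow API.

flow_table = []

def install_rule(match, action, priority=0):
    """Controller side: push a match/action rule into the switch's table."""
    flow_table.append({"match": match, "action": action, "priority": priority})
    flow_table.sort(key=lambda rule: rule["priority"], reverse=True)

def forward(packet):
    """Switch side: apply the highest-priority rule whose fields all match."""
    for rule in flow_table:
        if all(packet.get(field) == value for field, value in rule["match"].items()):
            return rule["action"]
    return "send_to_controller"  # unmatched packets get punted to the controller

# An experimenter might steer traffic for an experimental IP protocol number
# (253 is reserved for experimentation) out a dedicated research port, while
# ordinary TCP traffic keeps being forwarded normally.
install_rule({"ip_proto": 253}, "output:research_port", priority=10)
install_rule({}, "normal_forwarding", priority=0)

print(forward({"ip_proto": 253, "dst": "10.0.0.7"}))  # output:research_port
print(forward({"ip_proto": 6, "dst": "10.0.0.7"}))    # normal_forwarding
```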

Fixing routing: IPv6

One key component of the future internet is already in use today. IPv6 is helping address the near-critical shortage of addresses in the older IPv4 addressing system. While the more than four billion addresses available to IPv4 (32 bits) must have seemed impossibly large to the pioneers of the internet, the proliferation of smartphones as well as IP-addressable consumer and industrial devices has nearly used up the total. IPv6, by contrast, offers 128 bits of addressing — about 340 undecillion addresses, or 3.4 x 10^38. Possibly enough for quite a few planets full of people, robots, and smart appliances.
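A quick back-of-the-envelope check of those figures (the only assumption here is a world population of roughly seven billion):

```python
# Sanity-checking the IPv4 vs IPv6 address-space figures quoted above.
ipv4_total = 2 ** 32               # 32-bit addresses
ipv6_total = 2 ** 128              # 128-bit addresses
world_population = 7_000_000_000   # rough 2012 estimate

print(f"IPv4 addresses: {ipv4_total:,}")                                  # 4,294,967,296
print(f"IPv6 addresses: {ipv6_total:.3e}")                                # about 3.403e+38
print(f"IPv6 addresses per person: {ipv6_total / world_population:.3e}")  # about 4.9e+28
```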
Less discussed are some of the other innovations in IPv6. It has a much-improved multicast capability, which might allow for much more efficient large-scale broadcasts of popular events like concerts or even TV shows over IP. IPv6's support for stateless address autoconfiguration using ICMPv6 (Internet Control Message Protocol version 6) may help make common DHCP issues a thing of the past. In a nifty twist, entire subnets can be moved without needing to be renumbered, and mobile device addressing is also improved. Individual packets in IPv6 can also be as large as four gigabytes, enough for an entire DVD — although of course the use case for such large single packets is likely to be limited to high-speed backbones, at least for now.
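To sketch how that stateless configuration works in the classic case: a host hears a /64 prefix in a router advertisement and builds its own address from its MAC, with no DHCP server involved. The prefix and MAC below are made-up example values, and real deployments layer duplicate address detection and privacy addresses (RFC 4941) on top of this.

```python
# Deriving an IPv6 address the classic SLAAC way: router-advertised /64 prefix
# plus a modified EUI-64 interface identifier built from the host's MAC
# (flip the universal/local bit, insert ff:fe between the MAC's two halves).
# The prefix and MAC here are example values only.
import ipaddress

def slaac_address(prefix: str, mac: str) -> ipaddress.IPv6Address:
    octets = [int(part, 16) for part in mac.split(":")]
    octets[0] ^= 0x02                               # flip the universal/local bit
    eui64 = octets[:3] + [0xFF, 0xFE] + octets[3:]  # wedge ff:fe into the middle
    interface_id = int.from_bytes(bytes(eui64), "big")
    network = ipaddress.IPv6Network(prefix)
    return network[interface_id]                    # top 64 bits from the router, bottom 64 from the host

print(slaac_address("2001:db8:1234:5678::/64", "52:54:00:12:34:56"))
# 2001:db8:1234:5678:5054:ff:fe12:3456
```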

Testing the next internet

Internet 2 International Reach Map
Unlike the current internet, which had the luxury of growing relatively slowly from a few dozen and then a few hundred research nodes over several decades, the next internet will need to spring nearly fully formed into a world where literally billions of nodes are online. To test all the new protocols and strategies being invented, scientists have developed GENI, a set of virtual networks that overlays the physical infrastructure of the internet and runs on top of Internet2, serving as a worldwide testbed.

When can I get one?

Despite the name, Internet2 isn't really an entirely new network — nor will it ever completely replace the internet we use today. Instead, the results of the research on Internet2, and the technologies developed to support it, will be rolled out over and alongside the current internet — much like IPv6 is being rolled out in phases to replace IPv4. As demand for applications like digital telepresence and virtual libraries continues to grow, they'll first be deployed over the current Internet2 to its members, then over time will spread to the larger internet community. No doubt the growing need for high-performance multiplayer gaming and the streaming of HD movies will be equally important in driving the deployment of the new, more capable network solutions being prototyped on Internet2.


FreeDOS 1.1 released after 17 years


FreeDOS Autoexec.bat editing


Some 17 years after the project began in 1994, and more than five years after version 1.0, FreeDOS 1.1 — the definitive open source implementation of MS-DOS — is now available to download.
The history of FreeDOS dates back to the summer of 1994, when Microsoft announced that MS-DOS as a separate product would no longer be supported. It would live on as part of Windows 95, 98, and (ugh!) Me, but for Jim Hall that wasn't enough, and so public domain (PD) DOS was born. Other developers quickly jumped on board, a kernel and utilities were written, and a usable version of PD-DOS began to emerge. It wouldn't be until 1998 that the first alpha build (version 0.05) was released, however, and that slow pace would continue, with a slew of betas culminating in a final 1.0 build in 2006, some 12 years after the project began. Along the way, the project was renamed FreeDOS.
When I tell the story of FreeDOS to my friends the next question is usually: “So, like, what’s the point of FreeDOS?” — a fair question, given the maturity of Linux and its massive support framework. Well, for a start, FreeDOS is already extensively used by recovery disks. If you’ve ever made a boot disk for the sake of checking your hard disk or memory, or fixing a broken installation of Windows, you probably used FreeDOS.
Doom under FreeDOS
Beyond that, though, FreeDOS is actually a very good environment for educational or simple systems. Compared to MS-DOS or FreeDOS, Linux is very fat. When combined with QBASIC or DJGPP (a C/C++ development environment), FreeDOS makes a surprisingly good development platform. It's also important to point out that FreeDOS isn't actually an "old" operating system: It supports FAT32 (with LBA) and UDMA for hard drives and DVD drives, and the FreeDOS distro comes with an antivirus scanner and a BitTorrent client. USB support isn't quite there, but USB keyboards, mice, and external storage can be finagled into working.
When it comes down to it, though, the reason I like FreeDOS is that I can run it inside VirtualBox and play Doom. With a burst of nostalgia, I can fiddle around with HIMEM and EMM386 and Autoexec.bat to eke out just enough conventional memory to play Cannon Fodder. Ultimately, though, with DOSBox providing a much better (if less real) gaming experience, I would have to admit that FreeDOS is mostly just a curio for old-timer geeks.

Stock Android theme mandatory on all ICS devices, says Google



HTC Sense vs. Android 4.0 Holo theme -- choices, choices...


One small step for Android, one giant leap towards iOS and WP7: Google has announced, with surprisingly little fanfare, that all devices with Android Market installed must also include the stock Android 4.0 "Holo" (Nexus) theme/skin/layout.
"All devices with Android Market" is Google's way of saying "all legitimate Android devices." Basically, Android the OS is completely free for anyone to use, but Google keeps tight control of which phones and tablets can access the Market, and which devices come pre-loaded with its apps (Gmail, Maps, Navigation, and so on). In essence, Google is mandating that carriers and OEMs must include the default Nexus theme on all Android 4.0 devices.
Now, before you get too excited, this doesn’t seem to be quite as simple as “all Android devices will come with the stock skin.” The wording is a little more complicated than that. It seems like the Holo theme must be installed on every phone and tablet, but Google then says “We have no desire to restrict manufacturers from building their own themed experience across their devices.” The basic gist of it is that apps, in Android 4.0, will be able to choose whether to use Holo buttons and widgets, or the manufacturer’s widgets (Sense, Motoblur, TouchWiz, etc.) In other words, if you buy an HTC phone, you might soon have a mix of user interfaces to contend with: Sense on the homescreen and settings menus, but Holo apps.
Holo Light, an alternative Holo theme
On the other hand, though, if every Android 4.0 device comes with the Holo theme installed, will that mean that every user can simply select Holo as their default launcher? Much like you can switch between ADW, Go Launcher, and stock, will you be able to simply toggle Sense on and off? Currently, non-Nexus phones don't have the stock launcher installed, so you have to root your phone to install it; if every Android device now has to ship with Holo, though, switching could become very easy indeed. We'll have to wait until the first non-Galaxy Nexus Android ICS phone arrives to find out if this is the case.
If you've bought a Galaxy Nexus, or used a custom ICS ROM, it's impossible to ignore the similarities between Android 4.0, and iOS and Windows Phone 7. Android has consistently struggled to achieve the same levels of (perceived?) smoothness as iOS and WP7 — and now, with this mandate, Google is effectively admitting that a Wild West orgy of customization isn't necessarily the right way forward. For the first time, Google has a competitive interface, and it wants to make damn sure that it seizes the advantage.
Let's just hope that Google doesn't go too far and lock down the interface entirely, à la Windows Phone 7. We all know how that has fared.
For more info, including developer-related notes, hit up the official Android Developer site.
Update: We’ve had our in-bunker Android geeks look at this one, and it seems like all Android ICS devices will have the same Settings pages, but that the Homescreen and App Drawer will probably still be customized by the OEM/carrier. Google’s announcement really is rather vague, though, so it’s probably wiser to just wait and see.

Japanese government building defensive computer virus; Skynet incoming?



Virus detected!


In a move that proves that Godzilla isn’t the only worldwide threat to emerge from Japan, the Japanese Defense Ministry has been working with Fujitsu since 2008 to develop a defensive, weaponized computer virus capable of tracing the path of a cyber attack to its source in order to shut it down, disabling every system it comes across along the way.
Because the virus can disable the attacking program on its own and drill down to the source of the attack, the implications for widespread damage across the internet are massive. In theory this virus could attack and disable servers and PCs connected to the internet across the globe if pointed at the right target and armed with a zero-day exploit. That's the worst kind of doomsday scenario, and the likelihood of every other electronically savvy world power already working on similar virtual weapon platforms for "defense" is pretty high, but this is still alarming news.
Cyberweapon attack path
The system works like this: security equipment detects an attack on a network that it is actively defending. The virus is launched as a defensive measure, and it immediately begins to unravel the attack, disabling middleman machines along the way as it works its way back to the source (pictured right).
The problem becomes obvious almost immediately. The "springboard" computers that are shut down by the virus on the way to the source are likely personal PCs or corporate machines being used without the knowledge of their owners. There's also the issue of the affected machines being in another country, which could open the floodgates to international incidents, or worse.
What if the code for this virus were open-sourced, say, for security review? Government entities that throw millions of dollars into electronic warfare applications can build some seriously sophisticated worms, as we've seen with high-profile breaches of US corporations by China and of Iran's uranium enrichment plants by (allegedly) the US and Israel. Even Google hasn't been immune to the war being waged over the internet, with its own breach of hundreds of Gmail accounts back in June. While Google and others have been unable to prove without a doubt that a government is behind the attacks, it's clear that immense resources are being channeled into the internet as a theater for attacks.
Terminator, looking menacing
To speculate on a possible, admittedly far-fetched scenario, let's say the Japanese government open-sourced this virus or it was leaked to the internet. Months later, a modified version could be released that had been told a particular internet protocol was an offending virus or attack; say, XMPP. XMPP (Extensible Messaging and Presence Protocol) isn't actually a virus; it's an open chat standard used by many clients (including Facebook chat) to connect people. But if this virus were told to seek and disable any machines utilizing XMPP… well, you get the idea. Facebook has millions of active users at any given time, and XMPP is a popular protocol for business communication, too (Google Talk uses it, for example). If the virus were let loose with a zero-day vulnerability payload, it could cut a devastating path of destruction across the internet.
The threat of the latest and greatest virus being unleashed upon the internet is always a concern. The big question is whether governments should be spending their money and research on a virus, defensive or not. Let’s not forget that Skynet started as a defense program built by Cyberdyne Systems for the US. We all know how that “Global Digital Defense Network” ended up.