Friday, September 3, 2010

Intel's Xeon 5600-Series Server Processor

Intel 5600-Series
A mere five years ago, Intel turned the direction of desktop computing on its head when it introduced its first dual-core Pentium processors. Intel had realized it was "going against the grain" in trying to push clock frequencies beyond 10GHz, so it shifted focus from raw clock speed to more cores per socket.

The only problem with this is that servers and workstations were already using multi-socket configurations to get things moving faster. At this point, Intel's Xeons were getting royally beaten by the Opteron from AMD. The Xeons were single-core processors on dual-processor boards, only slightly aided by the same Hyper-Threading technology we know today.

It is true that the adoption of threaded software has been slow in the desktop market, whereas business-class workstations have been enjoying multi-core CPUs for quite some time. The cost savings of switching from a single-core, dual-socket system to a dual-core, single-socket box are substantial.

As hardware gets more and more powerful, software evolves to take advantage of it, which in turn calls for even more capable hardware. Into that cycle Intel launched its Xeon 5500-series CPUs for dual-socket servers and workstations. The 5500-series was characterized as the company's most important introduction in more than a decade, and for Intel it definitely was.

AMD had an architectural advantage thanks to HyperTransport, which was especially pronounced in multi-socket machines. On the other side you had Intel, which still relied on shared front-side bus bandwidth for processor communication. With the introduction of the 5500-series, Intel addressed that weakness via the QuickPath Interconnect, and the new architecture also brought Hyper-Threading and Turbo Boost to help improve performance in parallelized and single-threaded applications, respectively.

But Intel wasn't finished yet. This year's switch to 32nm manufacturing gave Intel the opportunity to add complexity to its SMB-oriented processors without altering their thermal properties. This is where the Xeon 5600-series comes into play: it supports up to six physical cores and 12MB of shared L3 cache per processor, all within the 130W envelope established by the 5500-series.

The latest 5600-series has little real competition in the workstation market right now; to challenge it, a rival would need to pair competent processors with at least fairly modern core logic. Regardless, there is still plenty of hardware to compare it against, including Intel's own Core i7-980X. The top Xeon 5600-series server processor is on sale now for a hefty $1,700 and is definitely one of the best server chips on the market today.


Looking for other Technology Rental information? Visit the Tech Travel Site Map for a variety of computer and technology rental ideas.

Wednesday, August 25, 2010

Three reasons to consider Windows Home Server


Multiple computers are commonplace in homes today. The children may have laptops for schoolwork while each parent has a desktop of their own. When it comes to storing files and sharing media among computers, a family may want to look into having their own dedicated home server. With Windows Home Server, it is easier than ever to set up a server for a family home. Below are a few qualities of the Windows Home Server OS that make it desirable for home use.

File sharing made easy

With Windows Home Server, every computer in the house, up to ten PCs, can access the server. The network administrator has access to all of the files, as well as a secure, password-protected folder for storing personal media and files. PCs see the server as a regular network storage device, making it easy to clear clutter off personal machines.

Connect to an Xbox or PS3

The latest version of Windows Home Server includes Windows Media Connect UPnP support, which makes it easy to connect to video game consoles such as the Xbox 360 and PlayStation 3; once connected, movies and other media can be streamed from the home server to the consoles. This makes it a breeze to watch a movie stored on the server on a television, or to play a music playlist without burning it to a CD. Windows Home Server's Power Pack 2 update added support for MP4 files and their metadata as well, so sharing among devices is even easier, no matter the file format.

Backups are simple for everyone in the house

Usually the main reason for having a home server is to have all files backed up, and Windows Home Server makes it simple to keep files from being lost. Daily backups can be scheduled for the whole drive on each computer connected to the server, with the option to exclude individual folders. All files are stored in their original form, not as backup image files, so a Word file remains a Word file and a JPEG remains a JPEG. Windows Home Server also lets the administrator view the firewall and antivirus status of every computer linked to the server.

With these features, plus many more, Windows Home Server is something to look into for households with more than one computer. It helps keep all files backed up, in case of hard drive failure, and makes sharing and streaming very easy.




Friday, August 20, 2010

Quit Dealing with Old School Server Management

Server Room
Hearing somebody talk about walking up to a server system to install an operating system may sound the same as somebody saying they had to get up and change the channel on the actual television set instead of using a remote. To many, this practice may seem aged and outdated, but it is still going on today; in fact, it's more prevalent than you might think.

Contemporary data centers brag about high security: retina scanners, powerful magnetic locks and temperatures cold enough to make ice cubes. In a lot of cases, however, these facilities lack the connectivity needed to manage all those systems remotely. Thankfully it isn't too late to fix this situation, thanks to out-of-band management.

Out-of-band management uses a dedicated server port connected to an IP network, allowing administrators to work with a system regardless of its power state. Put more simply, out-of-band management lets you work with a system as if you had physically walked up to its console. You can power the system on and off, change BIOS settings and set up RAID devices through this remote management option.
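As a concrete illustration, most management controllers speak the standard IPMI protocol, so these operations can be scripted with the widely used ipmitool utility. A minimal sketch (the host address and credentials below are made up for illustration):

```python
import subprocess

def ipmi_cmd(host, user, password, *action):
    """Build an ipmitool command line for the IPMI v2.0 'lanplus' interface."""
    return ["ipmitool", "-I", "lanplus",
            "-H", host, "-U", user, "-P", password, *action]

def power_cycle(host, user, password):
    # Equivalent to walking up and hitting the power button, but remote.
    # Requires network reachability to the server's dedicated management port.
    subprocess.run(ipmi_cmd(host, user, password, "chassis", "power", "cycle"),
                   check=True)

# Other useful actions: ("chassis", "power", "status") to check power state,
# or ("sol", "activate") for a serial-over-LAN console.
```

The same commands work whether the host OS is running, hung or absent entirely, which is exactly the point of out-of-band management.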

It used to be that you packed up your collection of CDs, floppy disks and your laptop and headed for the data center in search of the needy server system. You would typically waste an hour gaining access to the data center floor and finding the system you needed, then a few more minutes confirming that the server you were working on was in fact the correct one and that it was cabled correctly.

Once you got started, it would generally take you around three hours, including reboots, to install the OS, patch it, configure it and ready it for remote access through VNC or Terminal Services. It was only after all of this that you could head back to your desk to finish the project that would most likely suck out the rest of your day.

Integrated Lights-Out Management (ILOM) removes the need to walk up to and physically touch every server system in the building. ILOM is an integrated, free and powerful management method that comes standard with most contemporary rack and blade systems, delivering remote keyboard, video and mouse. This lets you manage a system completely, from power-up through the whole boot sequence and into the operating system.

Setting up ILOM is pretty simple, although it does require a bit of planning. You will have to decide between static and dynamic IP addressing for the service, and whether the ILOM network will be isolated or open. A static IP address requires more setup and more management but has the advantage of staying tied to a specific system for the life of that system. Dynamic addressing requires less setup and management on the system side; however, you will need a dedicated server to assign and track those dynamic addresses.
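One way to get the best of both approaches is dynamic addressing with a fixed reservation per ILOM MAC address on the management DHCP server. A sketch of what that might look like with dnsmasq (the subnet, MAC addresses and host names here are hypothetical, purely for illustration):

```
# /etc/dnsmasq.d/ilom.conf -- DHCP on an isolated management VLAN
dhcp-range=10.0.100.50,10.0.100.150,12h

# Pin each ILOM to a known address for the life of the system
dhcp-host=00:14:4f:aa:bb:01,ilom-web01,10.0.100.10
dhcp-host=00:14:4f:aa:bb:02,ilom-db01,10.0.100.11
```

With reservations in place, the ILOM side needs no per-system configuration, yet each controller still answers at a predictable address.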

An isolated ILOM network prevents unwanted connections from anybody who is not an administrator. Isolating your ILOM network also prevents IP addressing confusion with primary production, secondary production or backup interfaces. Data center management should only require the configuration of your server's built-in ILOM ports, so save yourself the physical trouble and get integrated lights-out management.


Thursday, July 15, 2010

China's Right on the Tail of Jaguar Supercomputer in Top500

Nebulae Supercomputer Second Fastest in the World
China's ambition to become a major power in the supercomputing arena has become plainly obvious with the introduction of a supercomputing system nicknamed "Nebulae", which has earned the title of second-fastest supercomputer in the world with a Linpack performance of 1.271 PFlop/s. The Nebulae system itself is a hybrid design built from a Dawning TC3600 blade system with Intel Xeon X5650 processors and NVIDIA Tesla C2050 GPUs. Despite being ranked #2 on the Top500, Nebulae is credited with the fastest theoretical peak performance worldwide, rated at 2.98 PFlop/s, the highest ever seen in the Top500. For a sense of just how fast this system can crunch numbers, a single minute of calculations from the Nebulae system would take your home computer over three weeks to complete.
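That "over three weeks" figure is easy to sanity-check. Assuming a fast 2010-era home PC sustains roughly 30 GFlop/s (our assumption; the article does not name a baseline):

```python
nebulae_flops = 1.271e15   # Linpack, 1.271 PFlop/s (from the article)
home_pc_flops = 30e9       # assumed ~30 GFlop/s for a fast 2010 desktop

# Replay one minute of Nebulae's work on the home PC:
seconds = nebulae_flops * 60 / home_pc_flops
days = seconds / 86400
print(round(days, 1))  # about a month -- comfortably "over three weeks"
```

Even a much more generous estimate of desktop performance leaves the gap at tens of thousands of times.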

Currently the United States still dominates the list, holding the #1 spot with its Jaguar supercomputer housed at the Oak Ridge National Laboratory in Tennessee, which turned in a Linpack performance of 1,750 trillion calculations per second (1.75 PFlop/s). By that measure Jaguar is roughly 38% faster than the Chinese contender, but its theoretical peak, which reaches only 2.3 PFlop/s, pales next to Nebulae's. In addition to the Nebulae system, China has a total of 24 high-performance systems on the Top500, with the Tianhe-1 supercomputer ranking in at number seven.

China is without a doubt rapidly becoming a major player in high-performance computing and is seeking to solidify its position in the supercomputing world. It is rumored that Dawning, the company responsible for the Nebulae machine, is developing an even faster machine for the National Supercomputer Center in Tianjin, China. The main purpose of this machine will be industrial research such as aircraft design, aerospace fundamentals and petroleum exploration. In stark contrast, many of the US government-owned machines are used to monitor nuclear weapon stockpiles.


Friday, July 2, 2010

Oracle Unveils New High-Performance Sun Fire Clusters

Oracle SunFire x86
Holding true to CEO Larry Ellison's promise that Oracle would focus primarily on the high-performance server market, last Monday Oracle expanded its line of x86 Sun servers with new rackmount, blade and network clustering servers. The newly released Sun Fire x86 Clustered Systems are intended for massive server configurations and deliver a smaller footprint than previous generations of Sun Fire server hardware. The servers come equipped with Xeon 5600 and 7500 processors, the latter aimed at mission-critical systems that must always run and never go down. In doing so, Oracle is competing against Intel's Itanium processor line and even Sun's own SPARC line, which has raised a few eyebrows.

The Sun Fire x86 Clustered Systems are designed for customers who run a mix of Oracle and non-Oracle enterprise workloads across a variety of systems. The clusters themselves consist of five rackmount and two blade servers that can hold two to eight processors and are tightly integrated with Oracle software, middleware and management applications. Oracle has assured customers that all of its software and middleware is certified to run on these systems, which have been optimized for Oracle Solaris, Oracle Enterprise Linux and Oracle VM, the last of which supports Red Hat, SUSE Linux and even the KVM hypervisor.

In addition to optimizing its software and operating systems, Oracle has integrated Sun's Ops Center with its own Enterprise Manager, providing a dynamic workflow for a single "lights-out" point of management. Blades, servers, storage, networking, virtualization and even powering the systems on and off can all be handled from a single web browser. Oracle has even included an Integration Assistant so you can configure and boot the systems straight out of the box within minutes. Even with no OS installed, the systems can reach across the Internet to Oracle's servers to check for firmware and BIOS updates, then download and install them.

"We claim we can manage a full blade ecosystem without requiring any network skills, because network virtualization is done in the silicon and through Oracle middleware technology," - Dimitris Dovas, director of product management for Sun hardware at Oracle

Oracle claims that this hardware will deliver up to a 45 percent improvement in energy efficiency over previous-generation systems and the ability to run 70 percent more workload. Oracle also claims the new hardware can deliver the same performance in one-tenth the space, with one-tenth the devices and one-fifth the number of network cables. The simpler cabling on the Sun Fire systems comes courtesy of Oracle's Sun Blade 6000 10GbE switches, which are embedded within the blade chassis or inserted at the top of the rack to connect the clusters. This improvement allows the server clusters to communicate with a network without having to connect through a separate networking cable and switch.


Monday, June 7, 2010

1 Petabyte Holographic Storage Coming Soon?

1 Petabyte Holographic Storage Disc
Just a few years ago, InPhase announced its revolutionary 300GB holographic disc, which was supposed to change the way we store data in high volumes. Shortly afterwards, Call/Recall announced plans to develop a 1TB optical drive and disc backwards-compatible with Blu-ray. Fast forward to today and holographic storage still hasn't caught on. Yet another holographic hopeful is taking a shot at the ultra-high-capacity optical disc market: Storex Technologies, with its Hyper CD technology. This time, however, it's not a terabyte they are aiming for, but a whopping 1,000,000 GB.

"The company holds patents on glass and glass-ceramics compositions as well as read/write mechanics and optics concept(s) applicable to high-density data storage. Using commercially available low power lasers and optics, capacities of more than 1,000,000 GB (1 PB) can be achieved using a CD size disk of 120mm in diameter and 1.2mm thick."

Storex Technologies was founded by Romanian scientist Eugen Pavel in 2007 and is, in effect, nothing more than a technology demonstration group looking for a partner to invest in its intellectual property. Pavel is a reputable scientist who has been conducting research in the fields of fluorescent photosensitive glasses and glass-ceramics for many years.

The idea relies on glass-ceramic discs and laser diodes to record information inside virtual layers of a CD-sized fluorescent photosensitive glass disc, using laser spots as narrow as 40nm. The layers are said to be 700nm apart - though we don't know how many layers per disc - and data access is said to occur at DVD-like speeds. Storex claims a 5,000-year life for the discs, but this is merely a theoretical estimate, since there is no physical product available for testing. The technology is still in the earliest stages of development, and it may be a long wait before we see the concept in action.
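The quoted geometry at least bounds the layer count. A back-of-the-envelope sketch, assuming (generously, and almost certainly unrealistically for real optics) that 700nm-spaced layers could span the full 1.2mm disc thickness:

```python
disc_thickness_m = 1.2e-3   # 1.2 mm, from the article
layer_spacing_m = 700e-9    # 700 nm between layers, from the article

max_layers = int(disc_thickness_m / layer_spacing_m)
gb_per_layer = 1_000_000 / max_layers  # 1 PB spread across every layer

print(max_layers, round(gb_per_layer))  # 1714 layers, ~583 GB per layer
```

Even at that implausible upper bound, each layer would need to hold more storage than an entire InPhase holographic disc, which gives a feel for how aggressive the 1 PB claim is.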

With so many failed attempts to bring holographic disk drives to the market, we sincerely hope this concept doesn't fade into oblivion like the rest.

Looking for other Technology Rental information? Visit the Rentacomputer.com Articles Page for a variety of technology rental ideas.

Wednesday, June 2, 2010

AMD Processors Continue to Dominate Top500 Supercomputers

AMD Based Jaguar Supercomputer

If you recall, last year the Jaguar Cray XT5 supercomputer topped the list as the world's fastest supercomputer, utilizing six-core AMD Opteron processors. Six months later, AMD continues to hold the reigning spot on the TOP500 Supercomputer List, announced just a few days ago at the International Supercomputing Conference in Hamburg, Germany. Jaguar continues to be the world's highest-performing system, featuring nearly a quarter million cores. The Cray XT5 was upgraded last year, improving nearly 70% to 1.75 PFlop/s, up from 1.04 PFlop/s in June 2009.

Additional Top 10 systems based on AMD technology are:

#3: Roadrunner - Los Alamos National Laboratory: A hybrid IBM system utilizing BladeCenter cluster technology in conjunction with AMD Opteron processors, with a processing speed of 1.042 petaflops.

#4: Kraken - University of Tennessee: A Cray XT5 system similar to Jaguar, which peaks at 0.83 petaflops.

#7: Tianhe-1 - National Supercomputer Center, China: Another hybrid system, using ATI Radeon graphics processors from AMD, with a processing speed of 0.56 petaflops.

The number of AMD technology-based supercomputers on the TOP500 now stands at 51, with systems found across the globe, including in Japan, the United Kingdom, Germany, Switzerland and Norway. AMD technology currently drives more than 4.2 petaflops of computing power in the top 10 alone, used by universities and national labs to conduct research in engineering, finance, climate prediction and energy-efficient design. In addition, Cray has recently announced plans for its next-generation Cray XE6 supercomputer, which will be based on AMD Opteron 6100-series processors and will be able to scale to more than 1 million cores.

“Our customers are selecting AMD platforms for supercomputing because they provide the cores, the memory, the power savings and clearly the performance that the world’s leading research institutions require for their ground-breaking work,” said John Fruehe, director, Server and Embedded product marketing at AMD. “AMD has been a leader in delivering the benefits of x86 and open source computing to the HPC community and it will be exciting to see what further advances the AMD Fusion™ family of Accelerated Processing Units (APUs) will bring.”
