Saturday, May 15, 2010

EPA Delivers Draft 1.0 for Data Center Storage

The US Environmental Protection Agency (EPA) is expected to establish the final standards for its Energy Star certification for data centers soon. The agency is currently holding meetings with various storage firms and soliciting feedback so that it can move forward with more precise standards for data storage systems such as enterprise hard drives and solid state drives. The EPA has already established Energy Star ratings for servers, but as you can imagine, establishing energy standards for storage solutions is a considerably more complex task. Unlike an appliance such as a personal computer or printer, the efficiency of a data storage unit can depend on a wide range of variables, including configuration, the controllers in use, power supplies, and even software.

The EPA has made steady progress since April 2009, when it first announced it would be moving forward with the program. Most recently, the EPA collected data from December 2009 through March 2010 to gain a better understanding of the relationship between hardware/software configuration and energy efficiency, active and idle state performance, and sensitivity to single-configuration changes. The EPA has released the results of this research as the Draft 1 Version 1.0 Specification, which can be downloaded for free courtesy of Energy Star. If you're technically inclined, the report has some pretty interesting results and may be worth the read.

Draft 1.0 introduces the idea of a "product family" certification, a nod to the fact that storage products are far more customizable and configurable than typical appliances. The report also sharpens several key definitions. For example, the definition of a "storage product" includes components and subsystems that are considered an "integral part" of the storage product architecture, but specifically excludes products that are usually associated with a storage environment at the data center level. Only the storage product itself can be subject to Energy Star certification; subsystems and components are not eligible. The Draft also defines an Active State, a Ready Idle State, and a Deep Idle State for those who want to take a look. If you have comments on Draft 1, they're due to the EPA by May 21.
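
To make those state definitions a little more concrete, here is a minimal Python sketch of how one might compare configurations within a "product family" on efficiency. The metric names and all the numbers below are illustrative assumptions on our part, not the draft's actual test method, which defines its own procedures.

# Illustrative sketch only: the Draft 1.0 test method defines its own
# metrics and procedures; these names and sample numbers are assumptions.
from dataclasses import dataclass

@dataclass
class StorageConfig:
    name: str
    capacity_tb: float        # raw capacity of the configuration
    active_watts: float       # average power under an active workload
    ready_idle_watts: float   # average power in the Ready Idle State
    active_iops: float        # throughput during the active workload

    def iops_per_watt(self) -> float:
        """Active State efficiency: work done per watt consumed."""
        return self.active_iops / self.active_watts

    def idle_watts_per_tb(self) -> float:
        """Ready Idle State overhead: power burned just to hold data."""
        return self.ready_idle_watts / self.capacity_tb

# Two hypothetical configurations of the same "product family"
configs = [
    StorageConfig("base (8 HDDs)", 8.0, 240.0, 180.0, 1800.0),
    StorageConfig("max (16 HDDs)", 16.0, 420.0, 310.0, 3400.0),
]

for c in configs:
    print(f"{c.name}: {c.iops_per_watt():.1f} IOPS/W active, "
          f"{c.idle_watts_per_tb():.1f} W/TB idle")

The point of the exercise is the one the EPA itself makes: the two configurations above belong to the same product yet score quite differently, which is exactly why single-number ratings are harder for storage than for servers.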

Looking for other Technology Rental information? Visit the Tech Travel Site Map for a variety of computer and technology rental ideas.

Tuesday, April 27, 2010

Former Fujitsu President Sues After Termination

The ex-president of Fujitsu, Kuniaki Nozoe, is now threatening to sue the IT services giant for damages over losses suffered by the company, and has even asked the corporation to sue some of its own executives. What prompted the legal action was his forced resignation last September. This March he wrote to the company asking that his resignation be nullified and reversed, a tactic which hasn't gone very well for him. In response, Fujitsu alleged that he had been forced to quit because of his ties to organized crime. In fact, the board said it had previously warned Nozoe that such links were in conflict with "the Fujitsu Way".

Fujitsu first announced Mr Nozoe's resignation in September 2009, citing health issues. Last month, however, the company admitted that the president had been forced out following an investigation into his business links. The investigation found that Mr Nozoe had a relationship with a third-party company said to "have an unfavourable reputation", a common euphemism in Japan for ties to the Yakuza. Nozoe stated that the relationship was merely personal, but upon being confronted with the allegations, he and the board agreed to issue a statement attributing his departure to poor health rather than naming the third party. Although Nozoe did not break any laws, Fujitsu maintains that he failed in his duties as president.

The episode has since raised questions over the role of organized crime syndicates in big Japanese business. "The suggestion that a major Japanese company has been linked with the yakuza is not surprising," said Dr Seijiro Takeshita, a director at the Japanese bank Mizuho International. "Associating with gangsters has often been a part of doing business in Japan - including even the banks." The Tokyo Stock Exchange has since given Fujitsu a strict warning over the issue.

Looking for other Technology Rental information? Visit the Tech Travel Site Map for a variety of computer and technology rental ideas.

Thursday, April 22, 2010

The Largest Cloud in the World is Dark, Shady, and Criminally Owned

When thinking of the largest cloud computing network known to man, what companies come to mind? Microsoft? Sure, they have a lot of computers, but not even close. Amazon? Getting bigger, but still not in the same ballpark. Google? As monstrous as its cloud is, it's a mere drop in the ocean. The largest cloud in the tech world isn't controlled by a brick-and-mortar corporation; rather, it is a network of computers controlled by the Conficker computer worm across more than 200 countries. So just how big is the world's biggest cloud?

"Conficker controls 6.4 million computer systems in 230 countries at 230 top level domains globally with more than 18 million CPUs and 28 terabits per second of bandwidth." said Rodney Joffe, senior vice president and senior technologist at the infrastructure services firm Neustar.

In other words, the biggest cloud on the planet is controlled by an unknown criminal enterprise that rents out its botnet to send spam, perform denial-of-service attacks, hack computers, spread malware, and steal personal information and money. In fact, it is believed that much of the comment spam that plagues many blogs is spawned from a portion of the Conficker cloud. Put simply, the cloud is "mobbed up."

In many ways, the Conficker cloud is much more competitive than legitimate vendors. The operators have experience with the virus dating back to 1998, and their footprint is bigger than any cloud previously seen. On top of that, there are no moral, ethical, or legal constraints, with the added bonus of zero costs. There is even an unlimited supply of new resources readily available, as Conficker spreads far and wide to take over and steal more computing power.

Just like legitimate cloud vendors, Conficker is available for rent and can be found just about anywhere in the world a user would want their cloud to be based. Users can choose the amount of bandwidth they want, the kind of operating system they want to use, and even what kind of services will be installed into the cloud, such as spam distribution, DoS attacks, and so on.

By the way, just in case you were wondering, the biggest legitimate cloud provider is Google, whose cloud is made up of approximately 500,000 systems, 1 million CPUs, and 1,500 gigabits per second (Gbps) of bandwidth. Coming in second is Amazon with 160,000 systems, 320,000 CPUs, and 400 Gbps of bandwidth. The third largest legitimate cloud is owned by Rackspace, which offers 65,000 systems, 130,000 CPUs, and 300 Gbps.

Although the last major attack performed by the Conficker cloud occurred over a year ago, against the Manchester police department, the virus is still considered a very real and palpable threat. If you fear you are infected by the Conficker virus, you can try out the Conficker Eye Chart, which pulls images from three sites that Conficker is known to block and displays them in a box. If all the images show up, you're in good shape; if one or more doesn't display, it could indicate a Conficker or other malware infection. Be aware that if you are browsing from behind a proxy, you may be able to see all the images and still be infected.
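
For the technically curious, the eye chart's logic is simple enough to sketch yourself: try to fetch images hosted on security-vendor sites that Conficker variants are known to block, and see which ones load. Here is a minimal Python sketch of that idea; the URLs below are placeholder stand-ins rather than the chart's actual image sources, and as noted above, a proxy can mask an infection.

# Sketch of the eye-chart logic: Conficker variants block lookups for
# many security-vendor domains, so failing to reach several such sites
# while an ordinary site loads fine hints at an infection.
# The URLs are placeholders, not the real chart's image sources.
import urllib.request

BLOCKED_BY_CONFICKER = [
    "https://www.microsoft.com/favicon.ico",
    "https://www.symantec.com/favicon.ico",
]
CONTROL = "https://example.com/"  # a site Conficker does not block

def fetchable(url: str, timeout: float = 10.0) -> bool:
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 300
    except OSError:   # covers URLError, HTTPError, timeouts
        return False

if not fetchable(CONTROL):
    print("No connectivity at all -- the test is inconclusive.")
elif all(fetchable(u) for u in BLOCKED_BY_CONFICKER):
    print("All security-site fetches succeeded: no sign of Conficker.")
else:
    print("Some security sites unreachable: possible infection "
          "(or a proxy/firewall -- see the caveat above).")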


Looking for a short-term file server rental for your next proof of concept or data center move? Call www.rentacomputer.com at 800-736-8772 today!

Thursday, April 15, 2010

x86 Server Market Directs Microsoft to End Itanium Development

Microsoft has announced that it will no longer support development for Intel's Itanium processor, effectively placing current Itanium products into maintenance status for the next three years, with support ending entirely in eight years. Microsoft also stated that the current versions of Windows Server 2008 R2, SQL Server 2008 R2, and its developer tool Visual Studio 2010 will be the last versions to support the Itanium architecture. For those wondering exactly why Microsoft would make this move, Joe Clabby, president of Clabby Analytics, offers his thoughts on the decision:

"Here's what really happens: Microsoft has invested in x86 architecture. People don't want Windows on Itanium. They want HP-UX on Itanium and maybe some NonStop and OpenVMS, but they have not done jumping jacks over Windows on Itanium. Microsoft is saying its committing heart and soul to x86 multicore and that's what the market wants,"

While the move is yet another blow to the Itanium line, losing Microsoft is not as painful as one would think. Approximately 80 percent of Itanium sales come from HP, whose Itanium systems run HP-UX, NonStop, or OpenVMS; Windows is merely a small portion of that business. Meanwhile, the marketplace continues to gravitate toward the architecture pioneered by Advanced Micro Devices, which added 64-bit extensions to the x86 processors used by many mainstream servers and PCs. Although Microsoft has offered 64-bit versions of Windows Server for both types of chips, the x64 versions have proven far more popular than the Itanium ones. Microsoft's reasoning for the decision seems to be sound.

"The natural evolution of the x86 64-bit ('x64') architecture has led to the creation of processors and servers which deliver the scalability and reliability needed for today's 'mission-critical' workloads," Reger said in a blog post. "Just this week, both Intel and AMD have released new high core-count processors, and servers with eight or more x64 processors have now been announced by a full dozen server manufacturers. Such servers contain 64 to 96 processor cores, with more on the horizon."

Despite waning mainstream support and the fact that Itanium has never been a big seller, the chip remains an important figure in the market, seeing as it is the processing power behind HP's high-end server line. In addition, Intel continues to develop new versions of the processor, most recently the Itanium 9300 introduced in February, and has promised at least two more generations, codenamed "Poulson" and "Kittson". While the immediate future seems secure for the Itanium series of processors, it remains to be seen just how far they will be able to go.


Looking for a deal on a file server just back from rental? Check out the just back from rental computer inventory at www.rentacomputer.com or call 800-736-8772 today!

Monday, March 29, 2010

AMD - "Welcome to the World of 12 Cores"

AMD has kicked off this week by debuting its new "Magny-Cours" server platform, which includes the new Opteron 6100 8-core and 12-core processors. These are the world's first 8- and 12-core x86 server processors and come with a host of new features, including four memory channels, HyperTransport™ 3.0 technology, a fourth HyperTransport link for better processor-to-processor communication in 4P servers, and new power management features that allow for increased performance compared to previous generations. The chips themselves began shipping last month, but AMD waited until nearly the end of the first quarter to make them official so that original equipment manufacturers (OEMs) would be ready with Opteron 6100-powered machines.

The Opteron 6000 platform targets the 2P and 4P market and is aimed at virtualization, database, and high-performance computing applications. Apart from the new CPUs, the platform features the G34 socket and the 5600 Series chipset with I/O virtualization capability, HyperTransport 3.0, and PCI Express 2.0. The Opteron 6100 processors are manufactured on 45nm technology and boast four HyperTransport links, a four-channel integrated DDR3 memory controller, up to 12MB of L3 cache, and up to 88% more performance than the previous generation of processors.

In today's economic climate, AMD has decided to downplay raw peak performance in favor of improved power consumption and a cheaper MSRP. Customers are simply looking to get more, not less, out of their IT dollar, and AMD is pitching the Opteron 6100 as the value leader. When placed next to servers of comparable power, AMD argues, competitors fall short on pricing; by AMD's figures, consumers are paying 42% more money for the honor of a slower processor. On the power consumption front, AMD shows its efficiency by beating an Intel part rated at a 130W Thermal Design Power (TDP) with its own part rated at an 80W Average CPU Power (ACP) in terms of performance. Effectively, AMD has doubled the cores while staying in the same power and thermal range as previous generations.
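
To see how claims like these reduce to simple arithmetic, here is a back-of-the-envelope Python sketch of performance-per-watt and price-per-performance. Every number below is a hypothetical stand-in chosen to mirror the shape of AMD's claims, not actual AMD or Intel figures, and note that TDP and ACP are measured differently, so any cross-vendor comparison of this kind is rough at best.

# Hypothetical numbers for illustration only. TDP (Intel) and ACP (AMD)
# are different power metrics, so treat the comparison as rough.
chips = {
    "12-core part, 80 W ACP":  {"score": 100.0, "watts": 80.0,  "price": 1000.0},
    "rival part, 130 W TDP":   {"score": 95.0,  "watts": 130.0, "price": 1420.0},
}

for name, c in chips.items():
    perf_per_watt = c["score"] / c["watts"]
    price_per_perf = c["price"] / c["score"]
    print(f"{name}: {perf_per_watt:.2f} score/W, "
          f"${price_per_perf:.2f} per unit of performance")

With these made-up inputs, the rival part costs 42% more ($1,420 vs. $1,000) while scoring lower, which is exactly the kind of result AMD's marketing comparison describes.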

In addition to AMD's aggressive 2P pricing, the company has upped the value and stripped away the "4P tax." Long gone are the days when customers were required to pay a premium for a processor capable of scaling up to four CPUs in a single platform. As of today, the 4P "tax" from AMD is effectively $0, but the same cannot be said for its competitors.

"As AMD has done before, we are again redefining the server market based on current customer requirements," said Patrick Patla, vice president and general manager, Server and Embedded Divisions, AMD. "The AMD Opteron 6000 Series platform signals a new era of server value, significantly disrupts today’s server economics and provides the performance-per-watt, value and consistency customers demand for their real-world data center workloads."

The Opteron 6000 platform has already been adopted by HP, Dell, Acer Group, SGI and Cray with many more expected.


Looking for other Technology Rental information? Visit the Tech Travel Site Map for a variety of computer and technology rental ideas.

Wednesday, March 24, 2010

Fujitsu Introduces Xeon Based Primergy System

Following the release of Intel's next-generation Xeon 5600 server processor, Fujitsu America has joined the ranks of server partners looking toward the cloud. The Japan-based Fujitsu plans to roll out its new Xeon 5600-equipped Primergy systems through its American subsidiary, specifically targeting cloud computing environments.

The Primergy CX1000 cabinet can hold up to 38 1U rack servers, which, according to Jon Rodriguez, senior product manager for Primergy at Fujitsu America, allows for a more efficient high-density computing system. In addition, the Primergy systems sport a new cabinet design featuring shared power distribution and new cooling components. The motivation behind this new design was to eliminate the traditional "hot aisle - cold aisle" setups seen in many data centers and to allow the Fujitsu cabinets to be placed back-to-back.

Rather than putting a power supply in each server, Fujitsu chose to implement a central power supply that feeds every node in the cabinet. Also, the backs of the cabinets have been sealed off, and large fans and exhaust vents are now located on top of the cabinet. As previously mentioned, these cabinets can be placed back-to-back, allowing for a more efficient use of space in the data center.

According to Fujitsu, these cabinets are up to 20 percent more power efficient than comparable server systems thanks to their Cool-Central design, which dictates how air flows throughout the cabinet. Essentially, this allows the system to separate heat from various components and determines where fans are placed for optimum airflow. Target markets for the CX1000 will primarily revolve around cloud computing providers and hosts, businesses looking to reduce costs by deploying their own cloud servers, Web 2.0 environments, and high-performance computing.

The Primergy CX1000 systems will be available from Fujitsu America resellers by the end of March. A fully loaded rack with 38 servers, a single processor per server, and 16GB of memory will run in the ballpark of $89,000 per rack. Of course, the price will increase as more CPUs, hard drives, and memory are added.


Looking for other Technology Rental information? Visit the Tech Travel Site Map for a variety of computer and technology rental ideas.

Saturday, March 13, 2010

NetApp's New Cloud Computing Management Solutions

Faced with today's increased economic pressures, many IT organizations are turning toward cloud computing as a means to reduce costs and improve efficiency in their data centers. Service providers play a very important role in this migration to the cloud by helping customers understand these benefits and by delivering a wide range of IT services via the cloud. Last week NetApp unveiled new design guides and capabilities geared specifically toward service providers, with the goal of helping them deliver greater value to their cloud customers. Company officials said the new tools fulfill a dual role: delivering cloud applications and services to enterprise clients while also increasing functionality and security for service providers building their own cloud environments.

NetApp Service-Oriented Infrastructure (SOI): The SOI leverages NetApp storage and serves as a standardized, unified infrastructure. This gives service providers the ability to consume and deploy storage, bandwidth, and resources in a repeatable manner, which helps speed time to market, improve flexibility, reduce costs, and increase service levels for their customers.

Data Protection as a Service (DPaaS): NetApp now provides a design guide that enables service providers to rapidly and effectively deploy archive and disaster recovery services. This includes NetApp technologies such as FlexClone for improved disaster recovery testing, SnapLock for compliance, and MultiStore for secure multi-tenancy. This DPaaS cloud design guide will help service providers reduce costs and complexities as well as increase flexibility.

Backup/Recovery as a Service (BRaaS): NetApp has teamed with Asigra, a leading provider of cloud backup and recovery software, to help providers quickly and efficiently deploy BRaaS solutions. The Asigra Cloud Backup software runs on the NetApp SOI, and the combination offers a truly scalable and secure backup and recovery solution for the cloud.

NetApp Open Management: NetApp's open management capabilities now allow service providers to leverage NetApp's storage capabilities regardless of which virtualization framework they use, whether NetApp's or another vendor's. This enables service providers to easily link their IT service management and orchestration portals to NetApp's storage automation engine for seamless storage and protection services.

"NetApp has a proven track record of successfully teaming with leading service providers to power their cloud service offerings," Patrick Rogers, NetApp's vice president of solutions and alliances, said in a statement. "Our strategy in this space is to enable the success of our solution partners, not compete with them, and through them provide a broad and open set of industry cloud services for enterprise IT customers."

For more reading see: Why Rent A File Server.


Looking for other Technology Rental information? Visit the Tech Travel Site Map for a variety of computer and technology rental ideas.