Monday, December 27, 2010

VMware to Bring Virtualization to Your Smartphone?

Virtualization arrived first on the workstation, then came server virtualization, and then virtual desktops. By now virtualization is ubiquitous in data centers and fairly well understood, even if it is still not widely used on the desktop. And it seems virtualization isn't stopping there: last week VMware announced a partnership with LG to bring virtualization to LG's Android phones, a move that could make employee-owned smartphones palatable to security-conscious companies.

Where the consumer market is concerned, LG's presence is strong and continuously growing. However, the company is overshadowed by BlackBerry in the enterprise market. That is why LG is working with VMware to integrate its end-user computing technologies into the company's smartphones. According to VMware, this will "enable users to adopt the mobile device of their choice, while allowing corporate IT departments to manage sensitive data on those devices with enterprise-level security and compliance."

The first application of this technology is expected to reach smartphones in 2011, although no exact release date or pricing has been mentioned yet. The virtualization layer will allow an LG smartphone to run two operating systems side by side, Android and BlackBerry for example, and to keep one account isolated from another. In practice that means a user can run a work (network) account securely alongside, but separate from, a personal account on the same device.

This is a great tool for companies that want their IT staff to have access to everything they need, whenever they need it. Expect a lot of companies to adopt it in their IT departments when it arrives in 2011.
SMBnow.com is news of, for and by SMBs!
SMBnow.com... The Small & Medium Business Magazine!

Friday, December 17, 2010

New Server Rental Services from Rentacomputer.com

Network Server Storage Solutions and peripherals are available for rent from Rentacomputer.com. Great for server migration projects. Here is a summary of the products available for rent.
Rack-Mount Server Rentals
Rack-Mount Servers are easy to use and highly reliable. These are desirable traits in a server when your company needs a temporary server rental to offset the stress on your own servers and machines during peak times of business.




Storage Server Rentals

Storage server rentals are ideal if your company needs extra storage for company information while you are evaluating various types of storage solutions. Whether you need a short-term rental or a long-term lease, our agents can get you the right kind of server for your storage needs.



Server Peripheral Rentals
All server rentals need peripheral rentals. Server peripherals include routers, switches, cables, etc. A Tech Travel Agent can bundle all of these peripherals into a single quote along with your server rental.


For more information visit the Server Rental Page


A Tech Travel Agent from Rentacomputer.com, the Worldwide Technology Rental Company, will schedule installation of projectors, computers, and office equipment on a permanent or temporary basis in over 1000 cities worldwide. Call 800-736-8772.


We have 3987 Installers, Technicians and Engineers stationed worldwide to serve you.

Sunday, December 12, 2010

Oracle's New SPARC Supercluster

When the Sun SPARC microprocessor first came out, many people thought that it would be a dead end. Well, Larry Ellison, CEO of Oracle, is on a mission to prove all of those naysayers wrong.

Oracle announced a plethora of new servers powered by SPARC as well as a roadmap for future SPARC development. Among the new systems are a SPARC-based Exalogic Elastic Cloud and the newly unveiled SPARC Supercluster. The new SPARC servers are being optimized for the upcoming Solaris 11 Unix OS as Sun's hardware and software portfolios are updated under Oracle.

According to Ellison, "For all our competitors that have been enjoying their Sun down and Sun set programs, this is the end of that. The Sunrise program is all about SPARC and Solaris, those two foundation technologies are going to lead the industry into the next generation of engineered systems."

One of the new SPARC systems announced by Oracle is the Exalogic Elastic Cloud server, which is powered by SPARC. Back in September, Oracle debuted an x86-based Exalogic server at OpenWorld. The Exalogic server is a middleware-enhanced cloud-in-a-box solution that is specifically designed for Java applications. On the flipside, the SPARC version is built on the T3-1B blade with its 16-core SPARC T3 processor.


Oracle is also making a general purpose computing platform with the new SPARC Supercluster while Exalogic is focused on Java middleware performance. "The Supercluster is a general purpose server that will run your middleware, your customer apps and your database extremely well," Ellison said. "It runs your database faster than anyone has run any database before."

Aside from talking about the new SPARC T3 processors in the Exalogic and Supercluster platforms, Ellison also talked about the benefits of InfiniBand, which is used in both systems to improve overall performance. InfiniBand is typically seen in high performance computing systems and offers lower latency than traditional Ethernet configurations.

According to Ellison, "We think InfiniBand is dramatically better for linking servers to other servers and servers to storage than Ethernet. We certainly have Ethernet connectivity to these boxes, but when these servers are talking amongst themselves and talking to storage, they're going through a high performance, reliable and guaranteed delivery network called InfiniBand."

Oracle is continuing to move ahead on SPARC performance beyond the current generation of T3 processors. According to Ellison, "The T4 is alive in the lab delivering a lot better single-threaded performance than the T3. In T3 we focused on adding more cores, and in the T4 we're trying to make our single thread performance better and it looks very good right now."
Rentacomputer.com Delivers to Las Vegas
Planning on a convention, trade show or conference?



Consider Renting your AV Equipment.



Save time! We set it up and save you money. Save up to 50% off the rates that most hotels and convention centers charge.



Rentacomputer.com has a wide range of AV rentals delivered and installed worldwide. Call today at 800-736-8772.

Saturday, November 27, 2010

5 Awesome and Free Server Tools


Servers are one of the most important things for any company, but what is more important is making sure that your server is running as it should. In order to do so, however, you need two things: a good IT tech and the right tools. A good IT Tech isn't really hard to come by, especially in today's business field where it seems like everybody and their mother are going to school to learn about computers. Tools, on the other hand, are a different story. While the tools themselves may not be hard to come by, knowing which ones to get can be.

There are many different types of tools you can download off of the internet in order to help with your server management. Some are good, some are bad, but the best ones may even be free. Take a look at some great server tools your company needs that are completely and utterly free.

Nagios
Nagios is an enterprise infrastructure monitoring suite. Besides being free, Nagios is also mature and commercially supported all around. The tool has grown from being a simple software project to a major contender in the contemporary network management field. Companies like Domino's Pizza, Ericsson, ADP, Wells Fargo, Citrix and even the United States Army are all avid users of Nagios.
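Under the hood, Nagios checks are just small programs that print a status line and report their result through an exit code (0 for OK, 1 for WARNING, 2 for CRITICAL). As a rough illustration of that plugin convention, rather than official Nagios code, here is a minimal TCP port check in Python; the host, port and timeout values are placeholders.

```python
#!/usr/bin/env python3
"""Minimal Nagios-style check: is a TCP service reachable?

Illustrates the plugin exit-code convention (0=OK, 2=CRITICAL);
this is not an official Nagios plugin.
"""
import socket
import sys

HOST = "mail.example.com"   # placeholder host to monitor
PORT = 25                   # placeholder service port
TIMEOUT = 5                 # seconds before giving up

try:
    with socket.create_connection((HOST, PORT), timeout=TIMEOUT):
        print(f"OK - {HOST}:{PORT} is accepting connections")
        sys.exit(0)
except OSError as err:
    print(f"CRITICAL - cannot reach {HOST}:{PORT}: {err}")
    sys.exit(2)
```

Dropped into a scheduler or a monitoring framework, a script like this is the basic building block that tools such as Nagios wrap with alerting, escalation and dashboards.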

Apache
If you think the Apache project is just simply a web server, then you are wrong. The project, which goes by the formal name of the Apache Software Foundation (ASF), consists of nearly 100 unique projects which are contained under Apache.

PSTools
If you want a suite of very useful command-line Windows tools, ones IT professionals consider essential to survival in a Windows-infested network, then PsTools is for you. PsTools provides incredible automation capabilities and is widely considered the single best free toolset for Windows available. You can grab this handy suite for free directly from Microsoft, as part of the Sysinternals collection.
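As a small, hedged example of the kind of automation PsTools enables (assuming the Sysinternals suite is installed and on the PATH, and using SERVER01 as a placeholder hostname you have admin rights on), a script can drive PsExec from Python to run a command on a remote Windows machine:

```python
"""Run a command on a remote Windows host via PsExec (Sysinternals).

Assumes psexec.exe is installed and on the PATH; SERVER01 is a
placeholder machine name.
"""
import subprocess

remote_host = r"\\SERVER01"                      # placeholder remote machine
command = ["psexec", remote_host, "ipconfig", "/all"]

# capture_output collects stdout/stderr so the result can be logged or parsed
result = subprocess.run(command, capture_output=True, text=True)
print(result.stdout)
if result.returncode != 0:
    print(f"psexec exited with code {result.returncode}: {result.stderr}")
```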

Wireshark
Wireshark is a must have tool for anybody who runs a network, regardless of size. Wireshark is a network packet capture and analysis program that is here to assist you in achieving a trouble-free network. Network problems will not be prevented by Wireshark, but the tool will allow you to analyze your problems in real time and, hopefully, avoid future problems.

ShareEnum
Most people will consider ShareEnum a little obscure, but for those who use it, it is very handy. ShareEnum shows you all the file shares on your network as well as their associated security settings. It is a very valuable security tool and, like PsTools, is free from Microsoft.

So there you have it, five incredibly useful and incredibly affordable (come on, they're free) tools that every person should have for their servers. IT professionals from all over use these tools and ones like them every day in order to keep their servers working as well as possible, so why shouldn't you? If you are in need of some good server tools, then start downloading these five freebies now!
SMBnow.com is news of, for and by SMBs!
SMBnow.com... The Small & Medium Business Magazine!

Sunday, November 14, 2010

IBM Cancels Irish Server Works

According to a recent report from the Irish Times, IBM is cutting what is left of the server manufacturing jobs at the company's Emerald Isle factory. The reason for the loss is said to be that IBM is shifting its server-making to factories the company owns in Shenzhen, China.

With this cutback IBM has lowered its overall workforce in Ireland to around 4,000 employees. That figure accounts for the 190 jobs IBM just cut as well as the 200 employees the company added back in March for a Smarter Planet research center and the additional 100 added to the company's software labs around the country.

IBM released in a statement not too long ago that they were moving their high-end server manufacturing for the Asia/Pacific and EMEA regions to factories in Singapore. IBM did note that they were keeping a "foot in the door" in the European Union by keeping entry and mid-range server manufacturing in the factory IBM has in Mulhuddart, which lies just outside of Dublin.

IBM has also shut down and outsourced all of their x64 server plants located in Scotland aside from very high-end System z and BladeCenter platforms. The Power Unix manufacturing that IBM had in Austin, Texas was also recently moved to Rochester, Minnesota.

Due to this recent cut in jobs, Ireland will be out of the IBM server manufacturing business completely. This also raises questions as to when IBM's mid-range Power Systems manufacturing will be shifted to China as well. IBM said that it will try to find jobs for as many of the 190 employees as possible, and that all of them will receive five weeks of severance pay for every year they served with the company.

According to IBM, "This change will place us closer to our growth markets and suppliers while providing greater operational efficiency and cost savings." The cost of shipping Power and z10 processors to China or Singapore is fairly small, and the higher cost of shipping finished Power Systems and mainframe servers back to European and African customers is evidently offset by the lower labor costs of China and Singapore.
Call Rentacomputer.com today at 800-736-8772 if you are in the market for a Nationwide Copier Rental.

Friday, October 29, 2010

10 Reasons to Virtualize Your Infrastructure

There are many reasons why virtualization is big right now. It can save you money, lower the number of physical servers you need, and it is environmentally friendly. However, there are a lot of other reasons you may want to virtualize your infrastructure, especially if you already work with virtual machines.

1. Common Management Interface
While it is very awesome and useful to have all of your servers available in a single application, it is even better to have the ability to control those servers from that single interface. Virtualization offers access to virtual machine hardware, consoles and storage, and your entire network of systems is at your disposal.

2. ILO Not Required
If your technicians haven't set up your Integrated Lights-Out (ILO) interfaces, virtualization removes that burden. With virtualization you can boot a VM from a powered-off state without any need for physical access to the system.

3. Easy Hardware Changes
Most companies dread upgrading their systems and changing their hardware. Getting into all the nooks and crannies of your infrastructure is no picnic. And if your hardware doesn't work, then you have to repeat the process all over again. No thanks. With virtualization you can upgrade memory, increase the number of CPUs or even add new hard disks to a VM with some simple mouse clicks.

4. Snapshots
VMs have an incredibly useful snapshot capability built in. A snapshot is an exact copy of your working VM taken before you do something that could break it. If the change does go wrong, you can revert to the snapshot and throw away the faulty state.
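On a KVM/libvirt host (one common open-source stack; VMware and Hyper-V expose equivalent calls through their own APIs), taking and reverting a snapshot can be scripted. The sketch below uses the libvirt Python bindings and assumes a local hypervisor and a VM named "webserver"; it is an illustration of the idea, not the only way to do it.

```python
"""Take a VM snapshot before a risky change, then roll back if needed.

Sketch using the libvirt Python bindings; assumes a local KVM/libvirt
host and an existing VM (domain) named "webserver".
"""
import libvirt

SNAPSHOT_XML = """
<domainsnapshot>
  <name>before-upgrade</name>
  <description>State captured before applying a risky change</description>
</domainsnapshot>
"""

conn = libvirt.open("qemu:///system")    # connect to the local hypervisor
dom = conn.lookupByName("webserver")     # placeholder VM name

# 1. Capture the current state
snap = dom.snapshotCreateXML(SNAPSHOT_XML, 0)

# 2. ... apply the risky change to the VM here ...

# 3. If the change broke something, revert to the saved state
dom.revertToSnapshot(snap, 0)

conn.close()
```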

5. Prototyping
Using a standard VM, you have the ability to prototype an application, database or operating system enhancement without spending hours trying to rebuild the physical system in your head before the unsuccessful attempt.

6. Fast System Communication
Host-to-guest as well as guest-to-guest communications occur without any standard physical hardware restrictions. Private VLANs create a system-to-system communication that is secure as well as fast. By using a private VLAN for a group of VMs you can create a multi-tier application with limited outside network exposure and without a lengthy set of allow and deny network rules.

7. Easy Decommissioning
There is a lot that goes into decommissioning a physical system. You have to turn off network ports, wipe the disks, unplug the system, remove the system from the rack and then dispose of the system. A VM's decommissioning process does basically the same thing only you do not have to be at the actual data center. There are also no systems to remove or return. Removing your VM from inventory will take you a mere couple of seconds.

8. Templating
Supporting a physical data center takes one gold disk for every new type of hardware you incorporate into your network. With VMs it takes a single Windows Server 2008 R2 VM for everything: one template that contains everything needed for deployment.

9. Fast Deployment
VMs do not require shipping, do not require installation, do not require power hookups, do not require network drops and do not require SAN cabling. By using templates or staged ISO images, a VM's deployment can take only minutes or at most a few hours.

10. Dynamic Capacity
With a traditional system you will have to plan far in advance to scale-up for a major marketing campaign that requires new physical computing capacity. However, virtualization allows you to rapidly respond to changing business conditions. You can scale-up whenever you need the extra capacity and even scale back whenever you don't.

It seems that virtualization can do a lot for your company's infrastructure by making things a whole heck of a lot easier. If it is within your company to do so, maybe virtualization is the perfect next step for you. Just make sure you do all your research first.
Visit Rentacomputer.com today or call at 800-736-8772 if you are in the market for a nationwide Laptop Rental.

Sunday, October 17, 2010

4 Data Center Migration Mistakes You Need to Avoid

No matter what part of business you are in, there are going to be some mistakes that you can potentially make along the way. Some of these will sneak up on you and are completely unavoidable. However, there are also some pretty big mistakes that CAN be avoided and need to be if you want things running smoothly.

In the world of Data Center migration, there are four big mistakes that you can make that are fairly common and potentially career damaging. Thankfully though, these mistakes, if caught soon enough, can be avoided. If you can identify these mistakes and avoid them before they happen, then your data center migration will go a lot smoother and so will your job.

For most companies moving a data center is a huge ordeal. In addition to that, a successful move can also act as a nice resume booster for any IT professional. A move that goes according to plan can showcase an IT professional's skills in large-scale project planning, project management, technology integration and even interpersonal communications. Migrating a data center provides a chance for exposure across the entire company considering nearly every department is touched by the IT organization, as well as affected by a data center move in some way or another.

The only problem is that a data center move can be littered with problems. Not just problems but potentially career-ending failures. There isn't an IT professional out there that wants to be on the receiving end of memos and discussions describing lost orders, missed deadlines or customer dissatisfaction that occurred because of something that got disrupted by a data center move that didn't go as planned.

There are four key mistakes that can occur in a data center migration and could spell doom for your job: Ignoring the Data, Combining the Move with Other Projects, Failing to Plan Appropriately and Not Creating an Inventory of Equipment, Applications and Processes.

Taking the time to address these issues one by one up front will significantly improve your chances of successfully migrating any company's data center. By having success, the personal recognition that is sure to come afterward is one of the best payments you could receive, aside from your actual paycheck.
Relax, enjoy your business trip, company project, convention or trade show. Your Tech Travel Agent will handle your computer rental needs including delivery, installation and pickup after the rental.

Thursday, October 7, 2010

Yahoo Computing Coop


Recently in Lockport, New York something strange was built. Reminiscent of barnyards, the newest building by Yahoo is not really for chickens but in fact is a data center. Yahoo plans to change the game when it comes to designs of data centers with their new coop design that is very environmentally friendly while at the same time saving themselves some big bucks.

Huge and metallic, the data center looms over the fields in Lockport, but for once the presence of a huge data center doesn't mean a drain on power or tons of money spent to keep it running. The Yahoo Computing Coop is built similarly to a chicken coop, hence its name, with slatted walls that take advantage of the year-round cool weather in Lockport. The cool air flows through the open-air data center and cools the servers without the presence of a chiller for cold water cooling. This design eliminates one of the most costly and energy-intensive factors of a data center.

To power the center the Yahoo Computing Coop doesn't rely on the usual method of coal-burning electricity but instead relies on the local hydroelectric power of Niagara Falls. While it costs a bit more for Yahoo initially, using hydroelectric power is an environmentally friendly concept that should pay off in the long run. Yahoo isn't doing it all for the environment, however. By using only 10 percent of its power for cooling Yahoo can expect to cut its electricity bills dramatically.

The Yahoo Computing coop consists of three data center halls connected to a central operations center with two more halls being built. When the two additional halls are finished, the center will have 36,000 square feet of space, enough to house 50,000 servers. Yahoo hopes to eventually house up to 100,000 servers at its Coop with the addition of more halls.

It seems like no wrong can be done with this data center. It is green, efficient, cost effective, and is also opening up close to 125 jobs for the area. While Yahoo scrambles to patent its design, similar centers are being built by Hewlett Packard and Google. The Yahoo Computing Coop will run applications like Yahoo Mail and Flickr and will hopefully start a new trend in green server housing.
Do you have a classroom computer training session coming up soon? Reserve all the computer equipment you need at a guaranteed great rate. Get a Classroom Training Rental Equipment quote here.

Saturday, October 2, 2010

Server OS Landscapes Going with the Flow

The world of UNIX and Linux server operating systems right now is anything but boring. However, that may not be the best thing, especially for enterprises that want a background of stability and certainty when they choose a server OS to power their business.

If you use Sun UNIX, then you know all about this. The OpenSolaris project recently disintegrated after a long run of uncertainty, to be replaced by efforts based on the Illumos project such as the OpenIndiana fork. Users of Solaris weren't greeted with such a rude awakening; their enterprise OS hasn't actually gone away. They have, however, had to come to terms with the fact that their UNIX is now an Oracle product, which means it is being developed in a very, very different style than it was under Sun.

Suse Linux Enterprise Server (SLES) belongs to Novell and is one of the two leading open source server distributions. The OS itself runs just fine, but being owned by Novell, a company known for being a little chaotic, has cast a shadow over the product.

On a lighter side, if you are a Red Hat shop, you can rest assured that you are running the number one open source server OS from a dependable and stable company. In fact, Red Hat Enterprise Linux (RHEL) is respected so highly that Oracle uses it as a basis for its own Linux offering.

But how long will this last? Oracle has decided to drop Red Hat compatibility in its Oracle Linux product after announcing the Oracle Unbreakable Kernel for Oracle Linux at Oracle OpenWorld last week. According to Oracle, it is a "fast, modern, reliable kernel that is optimized for Oracle software and hardware." Oracle also promises that the new kernel will offer a 75% performance gain over a Red Hat compatible kernel in OLTP performance tests, a 200% speedup in InfiniBand messaging and 137% faster solid state disk access.

It is rumored that VMware may buy Novell's Linux business, and if that does happen, then Red Hat is going to be a minnow among sharks in the server OS market going forward. To put it into perspective, Solaris is a part of a $140 billion Oracle Corporation while SLES would be a part of a $36 billion VMware. As for Windows, AIX and HP-UX, they are each owned by corporations worth some $220 billion (Microsoft), $166 billion (IBM) and $90 billion (HP) respectively. Red Hat is definitely the odd one out with only $7 billion.

That leaves IBM, HP and Microsoft. All these companies are fairly predictable and boring, but they are also huge. However, with all that is going on in the enterprise operating systems market at the moment, big, boring and predictable may be the perfect thing for many potential customers.
Call Rentacomputer.com today at 800-736-8772 if you are in the market for a Nationwide Copier Rental.

Wednesday, September 22, 2010

HP ProLiant MicroServer


Small businesses today are becoming more and more technologically based, and with the switch from paper and ink to computers the need for electronic storage and organization grows. Hewlett Packard's new ProLiant MicroServer is the newest solution for small but growing businesses to stay connected and organized without a large unnecessary server.

Usually setting up a server can cost a massive amount of money that an up-and-coming business just doesn't have. With the HP ProLiant MicroServer a business can have a central server without spending much at all. At $329.99 a ProLiant MicroServer costs about as much as a desktop PC.

The basic specs for the ProLiant are as follows:

Processor
Processor family: AMD Athlon™ II
Number of processors: 1
Processor cores available: 2

Memory
Maximum memory: 8 GB
Memory slots: 2 DIMM slots
Memory type: PC3 DDR3

I/O
Expansion slots: 1 half-height, half-length PCIe x16 Gen 2; 1 half-height, half-length PCIe x1 Gen 2
Network controller: 1GbE NC107i, 1 port

Storage
Maximum drive bays: 4 LFF SATA
Supported drives: Non-hot-plug 3.5-inch SATA
Storage controller: Integrated 4-port SATA RAID

Small, efficient, and easy to use, the HP ProLiant MicroServer caters perfectly to the needs of businesses with around ten client machines. Though it performs well on a small scale, the ProLiant MicroServer is strictly a starter server. Bigger businesses that need more power would do better to spend a little more and buy a server that can handle a bigger workload. The HP ProLiant MicroServer is made for ease of use in smaller environments, and it performs perfectly in this niche.
Looking to acquire an HP Laptop Rental? If so, then head on over to Rentacomputer.com or call 800-736-8772 to schedule your rental today.

Friday, September 17, 2010

KVM: Your Gateway to Open Source Server Virtualization

The thought of switching to a virtualized infrastructure sends a shiver down the spines of most CIOs. Security concerns, performance uncertainty and scalability questions are just a few of the things that make the physical-to-virtual fear so prominent. However, the Kernel-based Virtual Machine (KVM) from Red Hat is poised to put an end to those fears.

KVM runs along the same lines as Citrix XenServer, Microsoft Hyper-V and VMware ESX/vSphere. Just like all of these, KVM is a full virtualization technology. What that means is that virtual machines (VMs) built with KVM fully abstract the computer hardware, allowing guest operating systems to run as if they were on physical hardware. A fully virtualized VM presents its own memory, CPU, disks, peripherals, NICs and graphics adapters to the guest.
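Because KVM leans on hardware virtualization extensions (Intel VT-x or AMD-V), the usual first step before rolling it out is confirming that the host CPU advertises them. The quick, hedged check below reads /proc/cpuinfo on a Linux host; it only tells you the CPU flag is present, not that virtualization is enabled in the BIOS.

```python
"""Check whether the host CPU advertises hardware virtualization support.

Looks for the vmx (Intel VT-x) or svm (AMD-V) flag in /proc/cpuinfo on a
Linux host. A missing flag can also mean the feature is disabled in the
BIOS/firmware.
"""

def host_supports_kvm(cpuinfo_path="/proc/cpuinfo"):
    with open(cpuinfo_path) as f:
        for line in f:
            if line.startswith("flags"):
                flags = line.split(":", 1)[1].split()
                return "vmx" in flags or "svm" in flags
    return False

if __name__ == "__main__":
    if host_supports_kvm():
        print("CPU advertises VT-x/AMD-V; KVM full virtualization is possible.")
    else:
        print("No vmx/svm flag found; check BIOS settings or hardware support.")
```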

The biggest thing talked about when thinking of moving to a virtual infrastructure is definitely security. Virtualization, as well as cloud computing, have received negative remarks from techies and industry participants. However, VMs are not less secure than your physical machine nor are they any more secure. Just because they are virtual doesn't mean anything is changing on the security front.

If you switch to virtual, you must still take the same security precautions that you would with a physical machine. You will need to cut out unneeded services, throw on some anti-virus protection, install a few security fixes and provide firewall protection for all of your VMs.

Performance is another issue people bring to the table. People seem to think that going virtual means you have to sacrifice performance. Untrue. Red Hat boasts that even the heaviest computing workloads (Exchange, SAP, Oracle and Java) achieve at least 90% of physical-machine performance on KVM, and some workloads, like LAMP (Linux/Apache/MySQL/PHP) stacks, reach as much as 140% of physical-machine performance.

Probably the last thing people throw into the virtualization debate is scalability. KVM's ability to exploit multi-core hardware makes it far more scalable than adding a pile of under-utilized physical machines to your data center, and VMs are able to handle workloads with ease even in stressed environments.

KVM gives you anything and everything you could need with the familiarity of Red Hat Enterprise Linux (RHEL). Now I know a lot of people out there are like me, a "try before you buy" type of person. Well, for people like us, you can download and try KVM via Proxmox, which is not affiliated with Red Hat and which combines containers and KVM into a single hypervisor package.

KVM is definitely a major contender in the enterprise virtualization market. It is capable of holding its own against VMware vSphere, Microsoft Hyper-V and Citrix XenServer. KVM has good performance, security and scalability which should quash any fears you may be having about switching to virtualization technology.
If you need a BenQ Projector Rental, then Rentacomputer.com is the place for you. Call us today at 800-736-8772 to get your BenQ Projector Rental.

Wednesday, September 8, 2010

Supplies To Build A Server


Servers can be bought, but for the tech-savvy it may make more sense to build one. With a few essential pieces of equipment and a decent internet connection a person can save quite a bit of money going the do-it-yourself route. Here are a few things someone would need to get started.

Dedicated Computer

While a server can double as an everyday computer, a server is far more exposed to the internet than a desktop computer needs to be. With good security software, a machine dedicated to the server role can be locked down and kept much more secure than one that is also used as a desktop.

Server Software

Server software comes in a lot of flavors, and to run a server efficiently it is important to choose the right software for what the server will be used for. If the server is being used for gaming, the software will need to be obtained from the game's publisher, while a website can be served with an open source program such as Apache.
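To get a feel for what server software actually does before committing to a full Apache install, Python's built-in http.server module can serve a directory of static files. This is only a toy illustration for local testing, not a production replacement for Apache; the port number is a placeholder.

```python
"""Serve the files in the current directory over HTTP on port 8080.

A toy stand-in for real server software like Apache, useful only for
seeing how a web server responds to requests.
"""
from http.server import HTTPServer, SimpleHTTPRequestHandler

PORT = 8080  # placeholder port; real web servers usually listen on 80/443

server = HTTPServer(("0.0.0.0", PORT), SimpleHTTPRequestHandler)
print(f"Serving current directory at http://localhost:{PORT} (Ctrl+C to stop)")
server.serve_forever()
```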

Internet Protocol (IP) Address that remains static

IP addresses identify computers on the internet and are usually dynamic, changing every time the internet connection is reset. That is fine for servers simply being used to connect multiple computers or for gaming, but if the server is hosting a domain name (www.____.com), it will need a static IP address. With a dynamic address, the domain's DNS record can end up pointing at an IP the server no longer holds, causing problems for people trying to reach the site.
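A simple sanity check for that failure mode, sketched below with placeholder values for the domain and the address you expect, compares what DNS resolves against the address the server should be holding:

```python
"""Check that a domain name still resolves to the server's expected address.

The domain and expected IP are placeholders; on a dynamic connection this
check starts failing whenever the ISP hands out a new address.
"""
import socket

DOMAIN = "www.example.com"     # placeholder domain being hosted
EXPECTED_IP = "203.0.113.10"   # placeholder static IP assigned to the server

resolved_ip = socket.gethostbyname(DOMAIN)

if resolved_ip == EXPECTED_IP:
    print(f"OK: {DOMAIN} resolves to {resolved_ip}")
else:
    print(f"Mismatch: {DOMAIN} resolves to {resolved_ip}, expected {EXPECTED_IP}")
```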

Internet connection with fast upstream speed

The internet connection used for a server can either be dedicated to the server or shared among other computers, but a huge factor in choosing internet service is the upstream speed. Internet users typically do a lot more downloading than uploading, so most internet providers have changed their services to match these needs. People building their own servers should do a little research before choosing a provider to find one that has enough upstream bandwidth.


Security Software

Having a server opens up gateways to the internet that weren't there previously, as mentioned above, so it is crucial to have reliable antivirus software and firewall settings. Users need to be certain to have the security software on every desktop computer and laptop in the network to guarantee a safe connection.





Looking for other Technology Rental information? Visit the Tech Travel Site Map for a variety of computer and technology rental ideas.

Friday, September 3, 2010

Intel's Xeon 5600-Series Server Processor

A mere five years ago Intel turned the direction of desktop computing on its head when it introduced its first dual-core Pentium processors. Intel had realized it was "going against the grain" by trying to push clock frequencies toward 10GHz, so it shifted its focus from ever-higher frequencies to more cores.

The only problem with this is that servers and workstations were already using multi-socket configurations to get things moving faster. At this point, Intel's Xeons were getting royally beaten by the Opteron from AMD. The Xeons were single-core processors in dual-processor boards, only slightly aided by the same Hyper-Threading technology we know today.

It is true that the adoption of threaded software has been slow in the desktop market, whereas business-class workstations have been enjoying multi-core CPUs for quite some time. The cost savings of switching from a single-core, dual-socket system to a dual-core, single-socket box are substantial.

As hardware gets more and more powerful, software changes to take advantage, necessitating even more capable hardware. Intel launched their Xeon 5500-series CPUs for dual-socket servers and workstations. The 5500-series was characterized as the most important introduction in more than a decade, and it definitely was for Intel.

AMD had an architectural advantage in HyperTransport, which was especially pronounced in multi-socket machines. On the other side you had Intel, which still relied on shared front-side bus bandwidth for processor communication. With the introduction of the 5500-series, Intel addressed that weakness with the QuickPath Interconnect and added Hyper-Threading and Turbo Boost to help improve performance in parallelized and single-threaded applications.

But Intel wasn't finished yet. This year's switch to 32nm manufacturing gave Intel the opportunity to add complexity to its SMB-oriented processors without altering their thermal properties. This is where the Xeon 5600-series comes into play: it supports up to six physical cores and 12MB of shared L3 cache per processor, all within the 130W envelope established by the 5500-series.

Intel has said that the latest 5600-series is not aimed at the workstation market right now. To be competitive in that market, Intel would have to pair competent processors with at least fairly modern core logic. Regardless, there is still plenty of hardware to compare against, including the Core i7-980X. The Xeon 5600-series server processor is on sale now for a hefty $1,700 and is definitely one of the best server processors on the market today.


Looking for other Technology Rental information? Visit the Tech Travel Site Map for a variety of computer and technology rental ideas.

Wednesday, August 25, 2010

Three reasons to consider Windows Home Server


In homes today multiple computers are commonplace. The children may have laptops for school work while each parent has their own desktop. When it comes to storing files and sharing media among computers, a family may want to look into a dedicated home server. With Windows Home Server, it is easier than ever to set up a server for a family home. Below are a few qualities of the Windows Home Server OS that make it desirable for home use.

File sharing made easy

With Windows Home Server every computer in the house, up to ten PCs, can access the server. The network administrator has access to all of the files as well as a secure, password-protected folder for storing personal media and files. PCs see the server as a regular network storage device, making it easy to clear out clutter on personal machines.

Connect to an Xbox or PS3

The latest version of Windows Home Server includes Windows Media Connect UPnP, which makes it easy to connect to video game consoles such as the Xbox 360 and PlayStation 3, and once connected, movies and other media can be streamed from the home server to the consoles. This makes it a breeze to watch a movie stored on the server on a television, or play a music playlist without burning it to a CD. Windows Home Server's Power Pack 2 update added support for MP4 files and metadata as well, so sharing among devices is even easier no matter what the file format.

Backups are simple for everyone in the house

Usually the main reason for having a home server is to be able to have all files backed up, and Windows Home Server makes it simple to keep all files from being lost. Daily backups can be scheduled for the whole drive on each computer connected to the server. There is also the option to exclude single folders from the backup. All files are stored as they were originally, not as backup image files, so a Word file will remain a Word file, and a jpeg will remain a jpeg. Windows Home Server also lets the administrator view the statuses of the firewall and antivirus on all computers linked to the server.
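Windows Home Server handles all of this through its own console, but the underlying idea (copy everything except a few excluded folders, keeping files in their original formats) is easy to picture with a small script. The sketch below is a generic illustration with placeholder paths and folder names, not the Windows Home Server backup engine itself.

```python
"""Copy a user folder to backup storage while excluding selected subfolders.

Generic illustration of folder exclusion during a backup; the paths and
excluded names are placeholders, and this is not how Windows Home Server
itself performs backups.
"""
import shutil

SOURCE = r"C:\Users\Family\Documents"              # placeholder source folder
DESTINATION = r"\\HOMESERVER\Backups\Documents"    # placeholder server share
EXCLUDE = shutil.ignore_patterns("Temp", "Cache", "*.tmp")

# dirs_exist_ok (Python 3.8+) lets the copy be re-run over an existing destination
shutil.copytree(SOURCE, DESTINATION, ignore=EXCLUDE, dirs_exist_ok=True)
print("Backup copy finished; files keep their original formats.")
```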

With these features, plus many more, Windows Home Server is something to look into for households with more than one computer. It helps keep all files backed up, in case of hard drive failure, and makes sharing and streaming very easy.



Looking for other Technology Rental information? Visit the Tech Travel Site Map for a variety of computer and technology rental ideas.

Friday, August 20, 2010

Quit Dealing with Old School Server Management

Hearing somebody talk about walking up to a server system to install an operating system may sound like somebody saying they had to get up and turn the channel on the actual television set instead of using a remote. To many, this may seem aged and outdated, but it is still going on today. It's actually more prevalent than you might think.

Contemporary data centers brag about high security with retina scanners, powerful magnetic locks and temperatures cold enough to make ice cubes. However, in a lot of cases these facilities lack the connectivity needed to manage all those systems remotely. Thankfully it isn't too late to fix this situation, thanks to out-of-band management.

Out-of-band management involves using a dedicated server port connected to an IP network that allows administrators to work with a system regardless of the power state. To put it more simply, out-of-band management allows you to work with a system as if you had physically walked up to the actual console. You can power the system on and off, change BIOS settings and set up RAID devices using this remote management option.
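Most service processors of this sort (HP iLO, Sun/Oracle ILOM, Dell DRAC) also speak IPMI, so the same power operations can be scripted rather than clicked through. The sketch below shells out to the standard ipmitool client with placeholder credentials; the exact feature set beyond basic IPMI varies by vendor.

```python
"""Query and control a server's power state through its management port.

Uses the ipmitool CLI over the IPMI "lanplus" interface; the BMC address,
username and password are placeholders, and vendor-specific ILOM/iLO
features beyond basic IPMI are not covered here.
"""
import subprocess

BMC_HOST = "10.0.0.50"      # placeholder out-of-band management address
USERNAME = "admin"          # placeholder credentials
PASSWORD = "changeme"

def ipmi(*args):
    cmd = ["ipmitool", "-I", "lanplus", "-H", BMC_HOST,
           "-U", USERNAME, "-P", PASSWORD] + list(args)
    return subprocess.run(cmd, capture_output=True, text=True).stdout.strip()

# Check whether the box is powered on, and power it up if it is not
status = ipmi("chassis", "power", "status")
print(status)
if "off" in status.lower():
    print(ipmi("chassis", "power", "on"))
```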

It used to be that you packed up your collection of CDs, floppy disks and your laptop and headed for the data center in search of the needy server system. You would typically waste an hour gaining access to the data center floor and finding the system you needed. It also used to take a few minutes to figure out if the server you were working on was in fact the correct one and if it was cabled correctly.

Once you got started, it would generally take you around three hours, including reboots, to install the OS, patch it, configure it and ready it for remote access through VNC or Terminal Services. It was only after all of this that you could head back to your desk to finish the project that would most likely suck out the rest of your day.

Integrated Lights-out Management (ILOM) removes the need to walk to and physically touch every server system in the building. ILOM provides an integrated, free and powerful management method. ILOM comes standard with most contemporary racked and blade systems and delivers remote keyboard, video and mouse. This allows you to completely manage your system from power up, through the whole boot sequence and into the operating system.

Setting up ILOM is pretty simple, although it does require a bit of planning. You will have to decide between static and dynamic IP addressing for the service and whether the ILOM network will be isolated or open. A static IP address requires more setup and more management, but it has the advantage of tying the address to a specific system for the life of that system. Dynamic addressing requires less setup and management on the system side; however, you will need a server dedicated to assigning and tracking those dynamic addresses.

An isolated ILOM network prevents any unwanted connections by anybody who is not an administrator. If you isolate your ILOM network, it will also prevent IP addressing confusion with primary production, secondary production or backup interfaces. Data center management should only require the configuration of your server's built-in ILOM ports, so save yourself the physical trouble and get integrated lights-out management.

Looking for other Technology Rental information? Visit the Tech Travel Site Map for a variety of computer and technology rental ideas.

Thursday, July 15, 2010

China's Right on the Tail of Jaguar Supercomputer in Top500

China's ambition to become a major power in the supercomputing arena has become plainly obvious with the introduction of a supercomputing system nicknamed "Nebulae", which has earned the title of second-fastest supercomputer in the world with a Linpack performance of 1.271 PFlop/s. The Nebulae system itself is a hybrid design composed of a Dawning TC3600 blade system with Intel X5650 processors and Nvidia Tesla C2050 GPUs. Despite being ranked #2 on the Top500, Nebulae holds the highest theoretical peak performance ever seen on the list, rated at 2.98 PFlop/s. For a quick sense of just how fast this system can crunch numbers, a single minute of calculations on Nebulae would take your home computer over three weeks to complete.
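That "three weeks" figure is easy to sanity-check. Assuming a 2010-era home PC sustains somewhere around 30 GFlop/s (an assumption for illustration, not a figure from the Top500 list), the arithmetic works out roughly as follows:

```python
"""Rough sanity check of the 'one minute of Nebulae = weeks on a home PC' claim.

The 30 GFlop/s home PC figure is an assumption for illustration only.
"""
nebulae_flops = 1.271e15      # Linpack performance, 1.271 PFlop/s
home_pc_flops = 30e9          # assumed sustained speed of a home PC

work = nebulae_flops * 60                  # operations Nebulae performs in one minute
seconds_on_pc = work / home_pc_flops       # time for the home PC to do the same work
print(f"{seconds_on_pc / 86400:.1f} days") # about 29 days, comfortably over three weeks
```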

Currently the United States still dominates the list, holding the #1 spot with its Jaguar Supercomputer housed at the Oak Ridge National Laboratory in Tennessee which has a peak performance of 1,750 trillion calculations per second. By comparison, the Jaguar System is over 33% faster than the Chinese contender but pales in comparison in theoretical yield which only reaches 2.3 petaflops. In addition to the Nebulae system, China has a total of 24 high performance systems on the Top500 with the Tianhe-1 supercomputer ranking in at number seven.

China is without a doubt rapidly becoming a major player in high performance computing and is seeking to solidify its holdings in the supercomputing world. Currently it is rumored that Dawning, the company responsible for the Nebulae machine, is currently developing an even faster machine for the National Supercomputer Center in Tianjin, China. The main purpose behind this machine will be to model industrial research such as aircraft design, aerospace fundamentals, and petroleum exploration. In a stark contrast, many of the US machines which are owned by the government are used to monitor nuclear weapon stockpiles.

Looking for other Technology Rental information? Visit the Tech Travel Site Map for a variety of computer and technology rental ideas.

Friday, July 2, 2010

Oracle Unveils New High-Performance Sun Fire Clusters

Oracle SunFire x86
Holding true to CEO Larry Ellison's promise that Oracle would primarily focus on the high-performance server market, last Monday Oracle expanded its line of x86 Sun servers with new rackmount, blade and network clustering servers. The newly released Sun Fire x86 Clustered Systems are intended for massive server configurations and deliver a smaller footprint than previous generations of Sun Fire server hardware. The servers come equipped with Xeon 5600 and 7500 processors, the latter aimed at mission-critical systems that must always run and never go down. In doing so Oracle is competing against Intel's Itanium processor line and even Sun's own SPARC line, which has raised a few eyebrows.

The Sun Fire x86 Clustered Systems are designed for customers that run a mix of Oracle and non-Oracle enterprise workloads across a variety of systems. The cluster servers themselves consist of five rackmounted and two bladed servers that can hold two to eight processors and have been tightly integrated with Oracle software, middleware and management applications. Oracle has assured their customers that all Oracle software and middleware has been certified to run on these systems which have been optimized for Oracle Solaris, Oracle Enterprise Linux, and Oracle VM which supports Red Hat, Suse Linux, and even the KVM hypervisor.

In addition to optimizing their software and operating systems, Oracle has integrated Sun's Ops Center with its own Enterprise Manager providing a dynamic work flow for a single "lights-out" point of management. Blades, servers, storage, networking, virtualization and even powering the systems on and off can be handled all from a single web browser. Oracle has even included an Integration Assistant so you can configure and boot the systems straight out of the box within minutes. Even with no OS installed, the systems can reach out across the Internet to Oracle servers to check for firmware and BIOS updates and then download and install them.

"We claim we can manage a full blade ecosystem without requiring any network skills, because network virtualization is done in the silicon and through Oracle middleware technology," - Dimitris Dovas, director of product management for Sun hardware at Oracle

Oracle claims that this hardware will deliver up to a 45 percent improvement in energy efficiency over previous-generation systems and the ability to run 70 percent more workload. It also claims the new hardware can deliver the same performance in one-tenth the space with one-tenth the devices and one-fifth the number of network cables. The simpler cabling on the Sun Fire systems comes through Oracle's Sun Blade 6000 10 GbE switches, which are embedded within the blade chassis itself or inserted at the top of the rack to connect the clusters. This allows the server clusters to communicate with the network without each having to connect through a separate networking cable and switch.

Looking for other Technology Rental information? Visit the Tech Travel Site Map for a variety of computer and technology rental ideas.

Monday, June 7, 2010

1 Petabyte Holographic Storage Coming Soon?

1 Petabyte Holographic Storage Disc
Just a few years ago InPhase announced its revolutionary 300GB holographic disc, which was supposed to change the way we store data in high volumes. Shortly afterwards Call/Recall announced plans to develop a 1TB optical drive and disc with backwards compatibility with Blu-ray. Fast forward to today and holographic storage still hasn't caught on. Yet there is another hopeful taking a shot at the ultra-high-capacity optical disc market: Storex Technologies and its Hyper CD technology. This time, however, it's not a terabyte they are aiming for, but a whopping 1,000,000 GB.

"The company holds patents on glass and glass-ceramics compositions as well as read/write mechanics and optics concept(s) applicable to high-density data storage. Using commercially available low power lasers and optics, capacities of more than 1,000,000 GB (1 PB) can be achieved using a CD size disk of 120mm in diameter and 1.2mm thick."

Storex Technologies was founded by Romanian scientist Eugen Pavel in 2007 and is, in effect, nothing more than a technology demonstration group looking for a partner to invest in their intellectual property. Pavel is known as a reputable scientist and has been conducting research in the fields of fluorescent photosensitive glasses and glass-ceramics for many years.

The idea is based on glass-ceramic discs and laser diodes to record information inside the virtual layers of a CD-sized fluorescent photosensitive glass via 40nm-wide lasers. The layers are said to be 700nm apart - but we don't know how many layers per disc - and data access is said to occur at DVD-like speed. Storex claims a 5,000 year life for the disks, but this is merely a theoretical estimate since there is no physical product available for testing. The technology is still in the earliest stages of development and it may be a long wait before we see the concept in action.

With so many failed attempts to bring holographic disk drives to the market, we sincerely hope this concept doesn't fade into oblivion like the rest.

Looking for other Technology Rental information? Visit the Rentacomputer.com Articles Page for a variety of technology rental ideas.

Wednesday, June 2, 2010

AMD Processors Continue to Dominate Top500 Supercomputers

AMD Based Jaguar Supercomputer

If you recall, last year the Jaguar Cray XT5 supercomputer topped the list as the world's fastest supercomputer using six-core AMD Opteron processors. Six months later, AMD continues to hold the reigning spot on the TOP500 supercomputer list announced just a few days ago at the International Supercomputing Conference in Hamburg, Germany. Jaguar, with nearly a quarter million cores, remains the world's highest performing system. The Cray XT5 was upgraded last year, lifting its performance nearly 70% to 1.75 petaflops, up from 1.04 petaflops in June 2009.

Additional Top 10 systems based on AMD technology are:

#3: Roadrunner - Los Alamos National Laboratory: A hybrid system from IBM utilizing BladeCenter cluster technology in conjunction with AMD Opteron processors, with a processing speed of 1.042 petaflops per second.

#4: Kraken - University of Tennessee: A Cray XT5 system similar to Jaguar which peaks at 0.83 petaflops.

#7: Tianhe-1 - National SuperComputer Center, China: Another hybrid system using ATI Radeon graphics processors from AMD, with a processing speed of 0.56 petaflops per second.

The number of AMD technology-based Supercomputers on the TOP500 now stands at 51 with systems that can be found across the globe including in Japan, the United Kingdom, Germany, Switzerland, and Norway. AMD technology currently drives more than 4.2 Petaflops of computing power in the TOP10 alone which is used by universities and national labs to conduct research in engineering, finance, climate predictions, and energy efficient designs. In addition, Cray has recently announced plans for its next-generation Cray XE6 supercomputer which will be based on AMD Opteron 6100 Series processors and have the ability to scale to more than 1 million cores.

“Our customers are selecting AMD platforms for supercomputing because they provide the cores, the memory, the power savings and clearly the performance that the world’s leading research institutions require for their ground-breaking work,” said John Fruehe, director, Server and Embedded product marketing at AMD. “AMD has been a leader in delivering the benefits of x86 and open source computing to the HPC community and it will be exciting to see what further advances the AMD Fusion™ family of Accelerated Processing Units (APUs) will bring.”

Looking for other Technology Rental information? Visit the Tech Travel Site Map for a variety of computer and technology rental ideas.

Thursday, May 27, 2010

Bovine Powered Servers

As our demand grows for computing power, energy efficiency, and data storage capacity, the ability to produce the power needed in data centers is simply not keeping up with the times. The notion of reducing energy consumption in data centers has led many vendors to pursue increasingly optimistic and sometimes downright quirky ideas. HP is one such vendor which hopes to use sustainable processes in order to build data centers that are self-sufficient. In other words, construct and design a data center whose electricity is generated from a sustainable energy source and whose heat output can be recycled and reused within that same data center. This model aims to give technology companies more options for powering their servers.

In India, for example, electricity is in such high demand that there just isn't enough electricity to keep many of the data centers that are being built there up and running. "In India they need diesel generators because the power grid can't keep up with the growth," said Chandrakant Patel, one of HP Labs researchers. Patel points out that an enterprising farmer with a few cows could be the solution to this energy crisis and even offer a fresh alternative energy approach for IT managers.

So what exactly does a diesel generator have in common with a cow?

As odd as it sounds, cow manure could be a solution for small and medium businesses looking for cheaper real estate and electrical alternatives. With the advent of high-speed networks, there is no longer a need to locate data centers within the confines of a big city; they can now sit on cheaper land, such as the rural fields next to a dairy farm. Your average dairy cow produces 55 kilograms of manure per day, enough to generate about 3 kilowatt-hours of electrical energy. According to HP, a dairy farm with 10,000 cows would produce enough energy to power a 1-megawatt data center, or approximately 1,000 servers.
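HP's numbers hang together if you run them: 10,000 cows at 3 kWh of electrical energy per cow per day is about 30 MWh per day, while a 1 MW data center draws 24 MWh per day. A quick check, using only the figures quoted above:

```python
"""Back-of-the-envelope check of HP's cows-to-data-center arithmetic."""
cows = 10_000
kwh_per_cow_per_day = 3          # electrical energy from one cow's daily manure
datacenter_load_mw = 1           # the 1-megawatt data center in HP's example

farm_output_mwh_per_day = cows * kwh_per_cow_per_day / 1000
datacenter_demand_mwh_per_day = datacenter_load_mw * 24

print(f"Farm output:        {farm_output_mwh_per_day:.0f} MWh/day")        # 30 MWh/day
print(f"Data center demand: {datacenter_demand_mwh_per_day:.0f} MWh/day")  # 24 MWh/day
```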

This recycling process works as follows. Farms already have manure collection systems that use anaerobic decomposition to break down the cow waste, much like a sewage treatment plant would. In current systems the biomass goes into an anaerobic digester and, after decomposition, is released as methane gas. In HP's vision, instead of the farm simply flaring off the methane, one of the most potent greenhouse gases, the chemical energy in that methane would be converted into electricity to power the data center. To complete HP's sustainable and self-sufficient vision, the heat given off by the data center would be reused as part of the energy needed to break down the biomass.



This chart provided by HP Labs shows how cow manure and server heat from data centers can be combined to create a sustainable energy alternative.

Looking for other Technology Rental information? Visit the Tech Travel Site Map for a variety of computer and technology rental ideas.

Monday, May 17, 2010

Federal Agents Seize $143m in Fake Networking Equipment

Over the past five years federal authorities have seized more than $143m worth of counterfeit Cisco hardware and labels in a coordinated initiative called Operation Network Raider. The operation depends on the collaboration of several law enforcement agencies, including the FBI, Immigration and Customs Enforcement, and Customs and Border Protection, and has so far resulted in more than 700 seizures and 30 felony convictions. Beyond costing Cisco and other US networking enterprises millions of dollars in sales and technology, the real threat of these counterfeit routers and networking gear is to national security.

In 2008, Ehab Ashoor attempted to traffic 100 gigabit interface converters (GBICs) that were illegally manufactured in China and carried fraudulent documents indicating they were genuinely produced by Cisco. The equipment was destined for the United States Marine Corps and was intended to be used as communication equipment in Iraq. This month Ashoor was sentenced to 51 months in prison and ordered to pay Cisco $119,400 in restitution after being found guilty of trying to sell the counterfeit gear to the US Department of Defense. In January 2010 a Chinese resident was sentenced to 30 months in prison and ordered to pay restitution of $790,683 for trafficking counterfeit networking gear.

The prospect that our government and business networks may be at risk has propelled law enforcement agencies to work around the clock to crack down on these illegal distribution networks of bogus routers and switches. According to the Customs and Border Patrol there has been a 75 percent decrease in seizures of counterfeit networking hardware at U.S. borders from 2008 to 2009. Yet it is entirely possible that these scams could threaten national security as well as the financial well being of corporations by infusing critical networks with gear that is unreliable, or worse, riddled with backdoors and security vulnerabilities.

China has a well-known reputation for doing whatever it takes to get a competitive edge, and researchers at the University of Illinois have already proven that such vulnerabilities could be hardwired into a microprocessor. A hacked microprocessor could then log passwords and monitor network traffic as well as other sensitive data passing through the equipment. Cisco has assured us that so far there is absolutely no evidence that such equipment has been tampered with on any scale to contain backdoors, but it is not entirely out of the realm of possibility.

Looking for other Technology Rental information? Visit the Tech Travel Site Map for a variety of computer and technology rental ideas.

Saturday, May 15, 2010

EPA Delivers Draft 1.0 for Data Center Storage

The US Environmental Protection Agency (EPA) is expected to soon establish the final standards for its Energy Star certification for data centers. Currently the agency is holding sit-downs with various storage firms and looking for feedback so that it can move forward with more precise standards for data storage systems such as enterprise hard drives and solid state drives. The EPA has already established Energy Star ratings for servers, but as you can imagine, establishing energy standards for storage solutions is a considerably more complex task. Unlike appliances such as a personal computer or printer, the efficiency of a data storage unit can depend on a wide range of variables such as configuration, controllers in use, power supplies and even software.

The EPA has made steady progress since April 2009, when it first announced it would be moving forward with the program. Most recently, the agency collected data from December 2009 through March 2010 to gain a better understanding of the relationship between hardware/software configuration and energy efficiency, active- and idle-state performance, and sensitivity to single-configuration changes. The EPA has released the results of this research as the Draft 1 Version 1.0 Specification, which can be downloaded for free courtesy of Energy Star. If you're technically inclined, the report has some pretty interesting results and may be worth the read.

Draft 1.0 introduces the idea of a "product family" certification, reflecting the fact that storage products are far more customizable and configurable than other certified equipment. The report also sharpens several key definitions. For example, the definition of a "storage product" includes components and subsystems that are considered an "integral part" of the storage product architecture, but it specifically excludes products that are usually associated with a storage environment at the data center level. Only the storage product itself can be subject to Energy Star certification; subsystems and components are not eligible. The draft also defines an Active State, a Ready Idle State, and a Deep Idle State for those who want to take a closer look. If you have comments on Draft 1, they are due to the EPA by May 21.
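Purely as an illustration of how those definitions fit together, here is a minimal sketch in Python of how a vendor might record per-configuration power measurements for a product family. The structure, names, and numbers are hypothetical; the draft defines terms and test procedures, not a data format.

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class ConfigurationMeasurement:
        name: str                # e.g. "dual controller, 24-drive shelf" (hypothetical)
        active_watts: float      # power while servicing I/O (Active State)
        ready_idle_watts: float  # powered up and ready, but no I/O (Ready Idle State)
        deep_idle_watts: float   # reduced-power idle (Deep Idle State)

    @dataclass
    class ProductFamily:
        family_name: str
        configurations: List[ConfigurationMeasurement]

        def worst_case_ready_idle(self) -> float:
            # One plausible reading of a "product family" certification:
            # judge the family by its least efficient shipping configuration.
            return max(c.ready_idle_watts for c in self.configurations)

    family = ProductFamily("ExampleArray 5000", [
        ConfigurationMeasurement("entry configuration", 450.0, 320.0, 180.0),
        ConfigurationMeasurement("fully populated", 900.0, 610.0, 340.0),
    ])
    print(family.worst_case_ready_idle())  # 610.0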

Looking for other Technology Rental information? Visit the Tech Travel Site Map for a variety of computer and technology rental ideas.

Tuesday, April 27, 2010

Former Fujitsu President Sues After Termination

The ex-president of Fujitsu, Kuniaki Nozoe, is now threatening to sue the IT services giant for damages over losses suffered by the company, and he has even asked the corporation to sue some of its own executives. What prompted the legal action was his forced resignation last September. This March he wrote to the company asking that his resignation be nullified and reversed, a tactic that has not gone well for him. In response, Fujitsu alleged that he had been forced to quit because of his ties to organized crime. In fact, the board said it had previously warned Nozoe that such links were in conflict with "the Fujitsu Way".

Fujitsu first announced Mr Nozoe's resignation in September 2009, citing health issues. Last month, however, the company admitted that he had been forced out following an investigation into his business links. The investigation found that Mr Nozoe had a relationship with a third-party company said to "have an unfavourable reputation" - a phrase commonly used in Japan to imply ties to the Yakuza. Nozoe stated that the relationship was merely personal, but upon being confronted with the allegations, he and the board agreed to issue a statement attributing his departure to poor health rather than naming the third party. Although Nozoe did not break any laws, Fujitsu maintains that he failed in his duties as president.

The episode has since raised questions about the role of organized crime syndicates in big Japanese business. "The suggestion that a major Japanese company has been linked with the yakuza is not surprising," said Dr Seijiro Takeshita, a director at the Japanese bank Mizuho International. "Associating with gangsters has often been a part of doing business in Japan - including even the banks." The Tokyo Stock Exchange has since given Fujitsu a strict warning over the issue.

Looking for other Technology Rental information? Visit the Tech Travel Site Map for a variety of computer and technology rental ideas.

Thursday, April 22, 2010

The Largest Cloud in the World is Dark, Shady, and Criminally Owned

When you think of the largest cloud computing network known to man, what companies come to mind? Microsoft? Sure, it has a lot of computers, but not even close. Amazon? Getting bigger, but still not in the same ballpark. Google? As monstrous as its cloud is, it's a mere drop in the ocean. The largest cloud in the tech world isn't controlled by a brick-and-mortar corporation; rather, it is a network of computers controlled by the Conficker computer worm across more than 200 countries. So just how big is the world's biggest cloud?

"Conficker controls 6.4 million computer systems in 230 countries at 230 top-level domains globally, with more than 18 million CPUs and 28 terabits per second of bandwidth," said Rodney Joffe, senior vice president and senior technologist at the infrastructure services firm Neustar.

In other words, the biggest cloud on the planet is controlled by an unknown criminal enterprise that rents out its botnet to send spam, launch denial-of-service attacks, hack computers, spread malware, and steal personal information and money. In fact, it is believed that much of the comment spam that plagues blogs is spawned from a portion of the Conficker cloud. Put simply, the cloud is "mobbed up."

In many ways, the Conficker cloud is far more competitive than legitimate vendors. Its operators have experience with malware dating back to 1998, and their footprint is bigger than that of any cloud previously seen. On top of that, they face no moral, ethical, or legal constraints, with the added bonus of zero costs. There is even an unlimited supply of new resources readily available as Conficker spreads far and wide to take over and steal more computing power.

Just like legitimate cloud vendors, Conficker is available for rent and can be based just about anywhere in the world a user would want. Users can choose the amount of bandwidth they want, the kind of operating system they want to use, and even what kind of services will be installed in the cloud, such as spam distribution, DoS attacks, and so on.

By the way, just in case you were wondering, the biggest legitimate cloud provider is Google, whose cloud is made up of approximately 500,000 systems, 1 million CPUs, and 1,500 gigabits per second (Gbps) of bandwidth. Coming in second is Amazon with 160,000 systems, 320,000 CPUs, and 400 Gbps of bandwidth. The third-largest legitimate cloud is owned by Rackspace, which offers 65,000 systems, 130,000 CPUs, and 300 Gbps.
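To put those figures side by side, here is a quick back-of-envelope comparison in Python; the numbers are the ones quoted above, and the script only computes the ratios.

    # Figures as quoted above (bandwidth expressed in Gbps).
    clouds = {
        "Conficker": {"systems": 6_400_000, "cpus": 18_000_000, "gbps": 28_000},
        "Google":    {"systems": 500_000,   "cpus": 1_000_000,  "gbps": 1_500},
        "Amazon":    {"systems": 160_000,   "cpus": 320_000,    "gbps": 400},
        "Rackspace": {"systems": 65_000,    "cpus": 130_000,    "gbps": 300},
    }

    google = clouds["Google"]
    for name, c in clouds.items():
        print(f"{name:>9}: {c['systems']:>9,} systems, {c['cpus']:>10,} CPUs, "
              f"{c['gbps']:>6,} Gbps (~{c['gbps'] / google['gbps']:.1f}x Google's bandwidth)")

By those numbers, the botnet's claimed bandwidth alone is nearly 19 times Google's.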

Although the last major attack performed by the Conficker cloud occurred over a year ago, against the Manchester police department, the worm is still considered a very real and palpable threat. If you fear you are infected by Conficker, you can try the Conficker Eye Chart, which pulls images from three sites that Conficker is known to block and displays them in a box. If all the images show up, you're in good shape, but if one or more doesn't display, it could indicate a Conficker or other malware infection. Be aware that if you are browsing from behind a proxy, you may be able to see all the images and still be infected.
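The same basic idea can be approximated in a few lines of Python: check whether some well-known security sites are reachable while an ordinary control site loads normally. This is only a rough sketch of the technique, not the actual Eye Chart; the URLs below are placeholders you would replace with real security-vendor sites, and, as noted above, a proxy or content filter can mask the result.

    import urllib.request

    # Placeholder URLs -- swap in real security-vendor sites that Conficker
    # variants are known to block, plus a neutral control site.
    SECURITY_SITES = [
        "https://security-vendor-one.example.com",
        "https://security-vendor-two.example.com",
    ]
    CONTROL_SITE = "https://www.example.com"

    def reachable(url, timeout=5):
        """Return True if the URL responds at all within the timeout."""
        try:
            urllib.request.urlopen(url, timeout=timeout)
            return True
        except Exception:
            return False

    if not reachable(CONTROL_SITE):
        print("Control site unreachable -- no network connectivity, test inconclusive.")
    elif all(reachable(url) for url in SECURITY_SITES):
        print("All security sites loaded -- no sign of Conficker-style blocking.")
    else:
        print("One or more security sites failed to load -- possible infection (or a filter/proxy).")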


Looking for a short term file server rental for your next proof of concept or data center move? Call www.rentacomputer.com at 800-736-8772 today!

Thursday, April 15, 2010

x86 Server Market Directs Microsoft to End Itanium Development

Microsoft has announced that it will no longer develop for Intel's Itanium processor, effectively placing current Itanium products into maintenance status for the next three years, with support ending entirely in eight years. Microsoft also stated that the current versions of Windows Server 2008 R2, SQL Server 2008 R2, and its developer tool Visual Studio 2010 will be the last versions to support the Itanium architecture. For those wondering exactly why Microsoft would make this move, Joe Clabby, president of Clabby Analytics, offers his thoughts on the decision.

"Here's what really happens: Microsoft has invested in x86 architecture. People don't want Windows on Itanium. They want HP-UX on Itanium and maybe some NonStop and OpenVMS, but they have not done jumping jacks over Windows on Itanium. Microsoft is saying it's committing heart and soul to x86 multicore, and that's what the market wants."

While the move is yet another blow to the Itanium line, losing Microsoft is not as painful as one might think. Approximately 80 percent of Itanium sales come from HP systems running HP-UX, NonStop, or OpenVMS; Windows represents only a small portion of that business. Meanwhile, the marketplace continues to gravitate toward the architecture pioneered by Advanced Micro Devices, which added 64-bit extensions to the x86 processors used by mainstream servers and PCs. Although Microsoft has offered 64-bit versions of Windows Server for both types of chips, the x64 versions have proven far more popular than the Itanium ones. Microsoft's reasoning for the decision seems sound.

"The natural evolution of the x86 64-bit ('x64') architecture has led to the creation of processors and servers which deliver the scalability and reliability needed for today's 'mission-critical' workloads," Microsoft's Dan Reger said in a blog post. "Just this week, both Intel and AMD have released new high core-count processors, and servers with eight or more x64 processors have now been announced by a full dozen server manufacturers. Such servers contain 64 to 96 processor cores, with more on the horizon."

Despite waning mainstream support and the fact that Itanium has never been a big seller, the chip remains an important player in the market, as it provides the processing power behind HP's high-end server line. In addition, Intel continues to develop new versions of the processor, most recently the Itanium 9300 introduced in February, and has promised at least two more generations, codenamed "Poulson" and "Kitson". While the immediate future seems secure for the Itanium series of processors, it remains to be seen just how far they will be able to go.


Looking for a deal on a file server just back from rental? Check out the just back from rental computer inventory at www.rentacomputer.com or call 800-736-8772 today!

Monday, March 29, 2010

AMD - "Welcome to the World of 12 Cores"

AMD kicked off this week by debuting its new "Magny-Cours" server platform, which includes the new Opteron 6100 8-core and 12-core processors. These are the world's first 8- and 12-core x86 server processors and come with a host of new features, including four memory channels, HyperTransport 3.0 technology, a fourth HyperTransport link for better processor-to-processor communication in 4P servers, and new power management features that allow for increased performance compared to previous generations. The chips themselves began shipping last month, but AMD waited until nearly the end of the first quarter to make them official so that original equipment manufacturers (OEMs) would be ready with Opteron 6100-powered machines.

The Opteron 6000 platform targets the 2P and 4P market and is aimed at virtualization, database, and high-performance computing applications. Apart from the new CPUs, the platform features the G34 socket and the 5600 Series chipset with I/O virtualization capability, HyperTransport 3.0, and PCI Express 2.0. The Opteron 6100 processors are manufactured on a 45nm process and boast four HyperTransport links, a four-channel integrated DDR3 memory controller, up to 12MB of L3 cache, and up to 88 percent higher performance than the previous generation of processors.

In today's economic climate, AMD has chosen to emphasize power efficiency and a lower MSRP over outright peak performance. Customers are simply looking to get more, not less, out of their IT dollar, and AMD argues that this is where it delivers. According to AMD's own pricing comparison, consumers buying comparable competing servers pay roughly 42 percent more for the honor of a slower processor. On the power consumption front, AMD points to its 80W Average CPU Power (ACP) part matching the performance of a competing 130W Thermal Design Power (TDP) part from Intel. Effectively, AMD has doubled the core count while staying in the same power and thermal range as previous generations.
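As a rough illustration of what that power gap can mean in practice, the short sketch below compares the two quoted figures over a year of continuous operation. Keep in mind that ACP and TDP are not measured the same way, and the electricity price is an assumption, so treat the result as a ballpark only.

    HOURS_PER_YEAR = 24 * 365
    PRICE_PER_KWH = 0.10   # assumed US$ per kWh -- adjust for your own facility

    def annual_cost(watts):
        # Energy cost of running a part continuously at the given draw.
        kwh = watts * HOURS_PER_YEAR / 1000
        return kwh * PRICE_PER_KWH

    intel_130w = annual_cost(130)   # TDP figure quoted above
    amd_80w = annual_cost(80)       # ACP figure quoted above
    print(f"~${intel_130w - amd_80w:.0f} per socket per year, before cooling overhead")

At roughly $44 per socket per year before cooling, the savings add up quickly across a rack of 2P and 4P machines.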

In addition to AMD's aggressive 2P pricing, the company has upped the value by stripping away the "4P tax." Long gone are the days when customers were required to pay a premium for a processor capable of scaling up to four CPUs in a single platform. As of today, the 4P "tax" on AMD parts is effectively $0, something that cannot be said of its competitors.

"As AMD has done before, we are again redefining the server market based on current customer requirements," said Patrick Patla, vice president and general manager, Server and Embedded Divisions, AMD. "The AMD Opteron 6000 Series platform signals a new era of server value, significantly disrupts today’s server economics and provides the performance-per-watt, value and consistency customers demand for their real-world data center workloads."

The Opteron 6000 platform has already been adopted by HP, Dell, Acer Group, SGI and Cray with many more expected.


Looking for other Technology Rental information? Visit the Tech Travel Site Map for a variety of computer and technology rental ideas.

Wednesday, March 24, 2010

Fujitsu Introduces Xeon Based Primergy System

Following the release of Intel's next-generation Xeon 5600 server processor, Fujitsu America has joined the ranks of server partners looking toward the cloud. The Japan-based Fujitsu plans to roll out its new Xeon 5600-equipped Primergy systems through its American subsidiary, specifically targeting cloud computing environments.

The Primergy CX1000 cabinet can hold up to 38 1U server nodes, which, according to Jon Rodriguez, senior product manager for Primergy at Fujitsu America, allows for a more efficient high-density computing system. In addition, the Primergy systems sport a new cabinet design featuring shared power distribution and new cooling components. The motivation behind this design was to eliminate the traditional "hot aisle - cold aisle" layouts seen in many data centers and to allow the Fujitsu cabinets to be placed back-to-back.

Rather than giving each node its own power supply, Fujitsu chose to implement a centralized power supply that feeds each individual node. Also, the backs of the cabinets have been sealed off, and large fans and exhaust vents are now located on top of the cabinet. As mentioned above, these racks can be placed back-to-back, allowing for a more efficient use of floor space in the data center.

According to Fujitsu, these cabinets are up to 20 percent more power efficient than comparable server systems thanks to their Cool-Central design, which dictates how air flows through the cabinet. Essentially, the design separates the heat produced by various components and determines where fans are placed for optimum airflow. Target markets for the CX1000 primarily include cloud computing providers and hosts, businesses looking to reduce costs by deploying their own cloud servers, Web 2.0 environments, and high-performance computing.

The Primergy CX1000 systems will be available from Fujitsu America resellers by the end of March. A fully loaded rack with 38 servers, a single processor per node, and 16GB of memory will run in the ballpark of $89,000. Of course, the price will increase as more CPUs, hard drives, and memory are added.


Looking for other Technology Rental information? Visit the Tech Travel Site Map for a variety of computer and technology rental ideas.

Saturday, March 13, 2010

NetApp's New Cloud Computing Management Solutions

Faced with today's increased economic pressures, many IT organizations are turning toward cloud computing as a means to reduce costs and improve efficiency in their data centers. Service providers play a very important role in this migration by helping customers understand those benefits and by delivering a wide range of IT services via the cloud. Last week NetApp unveiled new design guides and capabilities geared specifically toward service providers, with the goal of helping them deliver greater value to their cloud customers. Company officials said the new tools will fulfill the dual role of delivering cloud applications and services to enterprise clients while also increasing functionality and security for service providers building their own cloud environments.

NetApp Service-Oriented Infrastructure (SOI): The SOI leverages NetApp storage and serves as a standardized and unified infrastructure. This gives service providers the ability to consume and deploy storage, bandwidth, and resources in a repeatable manner which helps speed time to market, improve flexibility, reduce costs, and increase service levels for their customers.

Data Protection as a Service (DPaaS): NetApp now provides a design guide that enables service providers to rapidly and effectively deploy archive and disaster recovery services. This includes NetApp technologies such as FlexClone for improved disaster recovery testing, SnapLock for compliance, and MultiStore for secure multi-tenancy. This DPaaS cloud design guide will help service providers reduce costs and complexities as well as increase flexibility.

Backup/Recovery as a Service (BRaaS): NetApp has teamed with Asigra, a leading provider of cloud backup and recovery software, to help providers deploy BRaaS solutions quickly and efficiently. The Asigra Cloud Backup software runs on the NetApp SOI, and the combination offers a scalable and secure backup and recovery solution for the cloud.

NetApp Open Management: NetApp's open management capabilities now allow service providers to leverage NetApp's storage capabilities regardless of which virtualization framework they use, whether NetApp's or another vendor's. This enables service providers to link their IT service management and orchestration portals easily to NetApp's storage automation engine for seamless storage and protection services.

"NetApp has a proven track record of successfully teaming with leading service providers to power their cloud service offerings," Patrick Rogers, NetApp's vice president of solutions and alliances, said in a statement. "Our strategy in this space is to enable the success of our solution partners, not compete with them, and through them provide a broad and open set of industry cloud services for enterprise IT customers."

For more reading see: Why Rent A File Server.


Looking for other Technology Rental information? Visit the Tech Travel Site Map for a variety of computer and technology rental ideas.

Wednesday, February 24, 2010

Great Refurbished HP 380G4

HP 380G4 Server
If you are looking for a dependable server that would be great for any SMB and can be purchased at a reasonable price, the refurbished Hewlett-Packard 380G4 small business server may be just what you need. It comes equipped with 4GB of RAM installed (16GB maximum), an Intel Xeon 3.2 GHz processor, and three 72GB 10,000 RPM SCSI hard drives.

When you compare it to the cost of a new system, which can be three times as much, this is a great deal and a great fit for any tax service, small medical practice, insurance office, and the like. If you want to learn more about this refurbished HP 380G4, you can find it through the Tech-Army Organization.




Looking for other Technology Rental information? Visit the Tech Travel Site Map for a variety of computer and technology rental ideas.