Friday, August 10, 2012

NASA Mars Mission Fueled By Amazon Web Services

Curiosity's Mars Mission


Cloud computing helps many businesses to wrangle and transmit data packets from different parts of the world. Amazon Web Services (AWS) and NASA's Jet Propulsion Laboratory (JPL) have now upped the stakes with a project that manages the flow of information from another part of the solar system--Mars.

The space agency's $2.6 billion Curiosity rover successfully landed on the red planet early on Aug. 6 and immediately began transmitting information back to Earth. These messages, which travel at the speed of light, take about 14 minutes--at the planets' present orientation--to speed across the cosmos to waiting scientists. This long-distance transit would be logistically daunting under any circumstances, but NASA faced an even greater burden because of the huge volume of data these transmissions carry.
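The quoted 14-minute delay is easy to sanity-check from the Earth-Mars distance and the speed of light. The distance figure below (~2.48e11 m for August 2012) is an assumed approximation; the actual separation varies widely over the two planets' orbits.

```python
# Back-of-the-envelope check of the one-way signal delay from Mars.
# The Earth-Mars distance at landing (~2.48e11 m) is an assumed figure;
# it ranges from roughly 5.5e10 m to 4.0e11 m over the synodic cycle.
SPEED_OF_LIGHT_M_S = 299_792_458
EARTH_MARS_DISTANCE_M = 2.48e11  # approximate distance in August 2012

def one_way_delay_minutes(distance_m, c=SPEED_OF_LIGHT_M_S):
    """Time for a radio signal to cross the given distance, in minutes."""
    return distance_m / c / 60

print(f"One-way light time: {one_way_delay_minutes(EARTH_MARS_DISTANCE_M):.1f} minutes")
# ~13.8 minutes, matching the article's "14 minutes" figure
```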

To address this profound challenge, JPL is using a wide gamut of AWS tools and services, including EC2, S3, SimpleDB, Route 53, CloudFront, the Relational Database Service, Simple Workflow, CloudFormation, and Elastic Load Balancing. This array of services is vital not only to the mission's research objectives but also to public outreach, as images recorded by the rover are made available almost immediately via JPL's Mars Science Laboratory site .

"NASA wanted to ensure that this thrilling experience was shared with fans across the globe by providing up-to-the-minute details of the mission," according to a case study AWS released to illustrate the project's technical accomplishments. With hundreds of thousands of concurrent visitors anticipated during traffic peaks, the case study asserts that "availability, scalability, and performance of the [site] was of the utmost essence." It also says that prior to AWS implementation, NASA/JPL did not possess the requisite Web and live streaming infrastructure to push hundreds of gigabits of content per second to the legions of site users.

"The public gets access as soon as we have access," Khawaja Shams, manager of data services for tactical operations at JPL, said in an interview. "All the images that come from Mars are processed in the cloud environment before they're disseminated." Amazon's services, he added, allow JPL "to leverage multiple machines to do actual processing."

The processing itself is complex, as the rover captures images using a stereoscopic system that uses two camera lenses. "In order to produce a finished image, each pair (left and right) of images must be warped to compensate for perspective, then stereo matched to each other, stitched together, and then tiled into a larger panorama," Jeff Barr, an AWS evangelist, wrote in a blog post.
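The stereo-matching step Barr describes can be illustrated with a toy sketch: find the horizontal shift that best aligns a left and right view of the same scene by minimizing the sum of squared differences. This is only an illustration of the matching idea; JPL's actual pipeline (perspective warping, dense stereo matching, stitching, panorama tiling) is far more sophisticated.

```python
# Toy sketch of the stereo-matching step: estimate the horizontal
# disparity between a left and right scanline by minimizing the
# mean squared difference over the overlapping region.

def estimate_disparity(left, right, max_shift):
    """Return the horizontal shift (in pixels) that best aligns right to left."""
    best_shift, best_error = 0, float("inf")
    for d in range(max_shift + 1):
        overlap = len(left) - d
        error = sum((left[d + i] - right[i]) ** 2 for i in range(overlap))
        error /= overlap  # normalize so longer overlaps aren't penalized
        if error < best_error:
            best_shift, best_error = d, error
    return best_shift

# Synthetic scanline: the "right" camera sees the same scene shifted 3 px.
left = [float((i * 7) % 13) for i in range(40)]
right = left[3:] + [0.0, 0.0, 0.0]
print(estimate_disparity(left, right, max_shift=8))  # 3
```

In a real pipeline this per-pixel disparity is what lets the ground team recover depth, which in turn feeds the geographic metadata used to plan drives.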

Though complicated, this method is vital to the mission's research goals. "One of the big misconceptions about rovers is that they're driven via joystick," Shams stated, "but even at the speed of light, we can't get there." Because of this limitation, a plan must be uploaded into the rover, which then "semi-autonomously takes care of the whole thing." The image acquisition technique allows researchers to generate geographic metadata that serves as a foundation for these plans. "It gives scientists situational awareness to do the best science and keep the rover safe," said Shams.

According to the AWS case study, such awareness maximizes "the time that scientists have to identify potential hazards or areas of particular scientific interest." This enables researchers to send longer command sequences to the rover, thereby increasing "the amount of exploration that the Mars Science Laboratory can perform during any given sol," or Martian day.

JPL's use of AWS technology furthers NASA's reputation as an early and avid adopter of cloud computing. It likewise continues the agency's goal of engaging the general public, an effort that has led to close relationships with Google and Microsoft, among others. In this instance, public engagement culminates in the Mars Science Laboratory site, which is built on the open-source CFML engine Railo, running on Amazon's EC2.

The Curiosity mission also extends NASA's recent effort to streamline operations and reduce costs by utilizing Amazon services for cloud-based enterprise infrastructure. NASA CIO Linda Cureton detailed the initiative in a June 8 blog post, writing, "This cloud-based model supports a wide variety of Web applications and sites using an interoperable, standards-based, and secure environment while providing almost a million dollars in cost savings each year." In the context of this mission, AWS allows NASA to track Web traffic in real time, and to scale capacity to meet demand. The cloud infrastructure also allows assets to be distributed intelligently across AWS regions depending on the part of world from which requests originate. This functionality produces a secure and stable environment despite the high bandwidth logistics, AWS said. It also can be economical because AWS downsizes activity when traffic is low, avoiding the problem of expensive but under-used resources.
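The origin-based distribution of assets across AWS regions can be sketched as a simple lookup from a request's country of origin to a serving region, which is roughly what Route 53's geolocation routing does behind the scenes. The country-to-region table below is a hypothetical, trimmed-down mapping for illustration only.

```python
# Minimal sketch of origin-based routing: map a request's country code to
# the AWS region that should serve it. The table is a made-up example of
# the kind of mapping a geolocation routing policy would encode.

REGION_BY_COUNTRY = {
    "US": "us-east-1",
    "CA": "us-east-1",
    "DE": "eu-west-1",
    "FR": "eu-west-1",
    "JP": "ap-northeast-1",
}
DEFAULT_REGION = "us-east-1"  # fallback for unmapped origins

def pick_region(country_code):
    """Choose the region serving a request from the given country."""
    return REGION_BY_COUNTRY.get(country_code.upper(), DEFAULT_REGION)

print(pick_region("de"))  # eu-west-1
```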

"Science data is growing at an exponential rate. Some upcoming instruments will produce terabytes of data every single day," Shams said. Such a deluge would have left NASA "out of data center space," making the ability to provision cloud-based machines invaluable. As NASA uses the cloud to solve its own puzzles, opportunities for other applications naturally arise.

"We can provision a supercomputing cluster in the cloud that would qualify as one of the top 500 in the world" at a cost of "a couple hundred dollars an hour," he said. "Think of the possibilities."



This shows the influence and trust that cloud computing is gaining.

Posted via email from Larkland Morley's posterous

Friday, August 3, 2012

Cloud Computing: Rackspace Kicks Off the OpenStack Cloud Roll-Out

Despite the platform's reported immaturity, Rackspace has gone into production with the Essex version of OpenStack, making it the first large-scale public cloud deployment of the fabled open source platform.

It's positioning Open Cloud as freeing users from vendor lock-in, a taunt directed at Amazon, Google and Microsoft, whose customers it hopes to lure away.

Other OpenStack clouds should follow quickly, say, from HP, Dell and Intel, and since they'll be look-alikes, users should be able to flit from one to another.

To get the roll-out started, Rackspace is offering public, private and hybrid hosting solutions and says there's unlimited availability of Cloud Databases and of both Linux and Windows Cloud Servers on OpenStack.

Some mojo called RackConnect will integrate public and private clouds.

What's new in the production release is the compute piece of the Infrastructure-as-a-Service (IaaS) widgetry. It derives from the Nova code NASA contributed to the project a couple of years ago, with a pinch of Rackspace's Ozone project mixed in. The space agency is no longer supporting the OpenStack effort, having fled to Amazon.

Rackspace contributed the Swift storage piece, which it has been running for a few years. Its OpenStack portfolio includes Cloud Files object storage and the Cloud Sites Platform-as-a-Service for .NET and PHP, plus monitoring.

There's also a new Control Panel that will work with both Rackspace's legacy code and OpenStack. It's supposed to make complex, large-scale cloud deployments as easy as a few mouse clicks. It will also let users tag servers, databases and websites to identify and organize infrastructure elements; search by name, tag and IP address; filter lists to find a server; use Action Cogs to display contextual menus of the most-used actions to complete tasks faster; and get dynamic feedback with real-time status information about the state of the infrastructure.
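The tag-and-search workflow the Control Panel offers boils down to filtering an inventory by name substring, tag, or exact IP. The sketch below is a made-up illustration of that idea, not Rackspace's implementation; the server records are hypothetical.

```python
# Sketch of the kind of name/tag/IP filtering the new Control Panel
# describes. The inventory records here are made-up examples.

servers = [
    {"name": "web-01", "ip": "10.0.0.5", "tags": {"prod", "web"}},
    {"name": "db-01", "ip": "10.0.0.9", "tags": {"prod", "db"}},
    {"name": "web-stage", "ip": "10.0.1.5", "tags": {"staging", "web"}},
]

def find_servers(inventory, *, name=None, tag=None, ip=None):
    """Return the servers matching every criterion that was supplied."""
    results = []
    for s in inventory:
        if name is not None and name not in s["name"]:
            continue  # name filter is a substring match
        if tag is not None and tag not in s["tags"]:
            continue  # tag filter requires exact tag membership
        if ip is not None and s["ip"] != ip:
            continue  # IP filter is an exact match
        results.append(s)
    return results

print([s["name"] for s in find_servers(servers, tag="web")])
# ['web-01', 'web-stage']
```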

Rackspace won't force its 180,000 existing cloud customers to migrate, but the new and improved Control Panel is expected to tempt them to move, a process that could take 12-18 months. Rackspace is expected to produce a tool at some point to egg them on although it says it doesn't want to hurry them. How it prevents them from backsliding to the old cloud is unclear.

Otherwise only new customers will default to OpenStack.

Rackspace says users can launch 200 Cloud Servers in 20 minutes. API performance is supposed to be 25x faster for server-create requests. And Rackspace claims its MySQL-based Cloud Databases benchmark at 200% faster performance (3x) than Amazon's MySQL-based Relational Database Service (RDS).

Linux servers with an entry-level 512MB of memory and 20 gigs of disk will start at 2.2 cents an hour or $16.06 a month. A Windows server with 1GB of memory will run eight cents an hour or $58.40 a month.
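The quoted monthly prices follow from the hourly rates assuming a 730-hour billing month (24 × 365 / 12, the convention the figures imply):

```python
# Check that the quoted monthly prices follow from the hourly rates,
# assuming a 730-hour billing month (24 * 365 / 12), which the
# article's own figures imply.

HOURS_PER_MONTH = 730

def monthly_price(hourly_rate):
    """Monthly cost in dollars for a server billed at the given hourly rate."""
    return round(hourly_rate * HOURS_PER_MONTH, 2)

print(monthly_price(0.022))  # 16.06 -> the 512MB Linux server
print(monthly_price(0.08))   # 58.4  -> the 1GB Windows server
```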

Rackspace means to add unlimited availability of Cloud Networks and Cloud Block Storage this fall. Cloud Networks has relied on Nicira, which OpenStack rival VMware bought last week for $1.25 billion.

The widgetry Rackspace mounted Wednesday was in beta test for four months. The push to develop the cloud stack is currently supported by 184 companies and a reported 3,300 programmers.

By the way, Rackspace is using Citrix' XenServer hypervisor in OpenStack, meaning it will still have to figure out how to cater to customers who want to use the VMware hypervisor.

Rackspace will roll out OpenStack in Europe in mid-August.

Good exposure for OpenStack


Cloud Computing: Lenovo and EMC Now Strategic Partners

In a rare outreach, Lenovo has teamed up with EMC, which gets another entry into the vast Chinese market through its new partner.

The pair is going to form an SMB-focused storage joint venture.

They've also got a server technology development program to extend Lenovo's nascent capabilities in the x86 server segment. The servers will be brought to market by Lenovo and embedded into selected EMC storage systems over time. It could threaten HP.

Lenovo is supposed to provide EMC's networked storage solutions to its customers, initially in China and then in other global markets. Both companies are supposed to do R&D in servers and storage.

Finally, EMC and Lenovo plan to bring "certain assets and resources" from EMC's Iomega business into a new joint venture that will provide Network Attached Storage (NAS) systems to SMBs and distributed enterprise sites, where EMC is seeing rising demand and use of infrastructure-demanding private clouds.

Lenovo wants to be "a leader in the new PC-plus era." EMC expects to significantly expand its presence in China.

The $30 billion-a-year Lenovo will put cash in the joint venture. EMC will contribute those Iomega assets and resources. Lenovo will have the majority interest - presumably that means 51% - and can probably expect better margins than it's used to from PCs. Lenovo currently ships more PCs than anybody else except HP.

It's expecting to see billions from the partnership and wants to grow its 15% share of the Chinese server market to a position of dominance, springboarding it into the global market.

Ostensibly Lenovo is replacing Dell, whose close partnership with EMC fell apart because of Dell's storage acquisitions. Of course Lenovo's SMB markets are sexier than Dell's, especially since Europe is such a downer.

Interesting news
