Saturday, December 15, 2012

Is Cloud Computing Killing Open Source Software?


The best thing about open source software has always been that it is freely available: any programmer or company can use it to develop their own version of the software. For a long time it has been the go-to solution for people willing to go outside the box to get the best results in their IT departments. Of course, these projects have never been without profit, and that profit has come from two sources that are now becoming obsolete because of the emergence of cloud computing and the affordability of most of its components.

The way open source software companies have made money so far has been by selling license agreements. Any company could take a system like MySQL, incorporate it into its own product, and then choose between using the open source license or buying a commercial license from MySQL, in this case.

However, because the cloud does not actually sell software systems but only time on those systems, companies like Amazon, which has built Amazon RDS on top of MySQL, do not have to pay any license fee. End users get exactly what they need and are willing to pay for it, while cloud service providers like Amazon pay nothing in licensing.

There has also been a second stream of income for big open source companies: because their software is modified and resold, the process creates a need for specialists, and those specialists are supplied by the original company, such as MySQL or Red Hat. But if the company using the software generates enough revenue from it, it can afford to hire its own specialists. And since its products are not resold as such, but only accessed by third parties on its own servers, there is nobody else left who would need those services.

However, the world of open source software does not end with MySQL, and even MySQL has alternative sources of funding. For one, the specialists hired by Amazon still need to be trained, tested and certified by a recognized authority, which ultimately routes back to Oracle, the current owner of MySQL. The same is true for any open source software.

Also, the Linux platform is what currently underpins Android, and as long as that exists there is little chance of the open source concept itself going out of date. Even Android is open source software that projects like CyanogenMod have taken up and further developed.

So ultimately the cloud cannot kill the open source concept, because the cloud is itself built on open source platforms. The game has gotten tougher for many open source companies, but they are already fighting back with new licensing schemes such as the Affero GPL.

By Luchi Gabriel Manescu

(Disclaimer: CloudTweaks publishes news and opinion articles from different contributors. All views and opinions in these articles belong entirely to our contributors. They do not reflect or represent in any way the personal or professional opinions of CloudTweaks.com or those of its staff.)


This is a very interesting discussion: if you can get all the services in a cloud, why bother with open source software? You can just leverage what is already in the cloud without worrying about maintenance and development work.

Posted via email from Larkland Morley's posterous

Sunday, December 2, 2012

The Northbound API is the key to OpenFlow’s Success

David Lenrow says:

The value of the SDN architectural approach (which is what SDN is: it isn't a network solution and doesn't do anything in and of itself, but rather lends itself to building solutions with a global network view and more abstracted APIs than the device or flow-table model) and of controllers with their associated NBI is that it completely abstracts the details of which southbound API is used to talk to the network devices. A controller based on the SDN architectural approach may or may not speak OpenFlow, and the answer to that question is a solid "don't care" from the Orchestration and Cloud OS layer talking to the NBI in the cloud stack. The power of SDN is that a controller can expose a network abstraction while the details of the device-level implementation stay completely hidden.

I completely agree that developing, sharing, and eventually standardizing the NBI is important and has the potential to be a game changer, but this is completely orthogonal to whether OpenFlow is the only, or even a good, southbound protocol for controlling some or all of the forwarding behaviors in a network. The ONF initially made the horrible mistake of positioning SDN as the tail and OpenFlow as the dog when they launched. Now that the interesting conversation in the industry is about the NBI, the ONF is at risk of becoming even more irrelevant in the future, because they don't appear to understand that the NBI is the key to integrating virtual networking with the on-its-way-to-ubiquity cloud movement.

The most innovative and important data center SDN solutions are being built without the not-yet-ready-to-control-anything-but-the-forwarding-table OpenFlow protocol, and the ONF needs to have jurisdiction over the interesting decisions for the industry or become super-irrelevant as the flow-table-wire-protocol foundation. The NBI is really important, but it has almost nothing to do with OpenFlow and whether OpenFlow will ever be a comprehensive protocol for controlling network devices.

I have always believed this to be the only real value of SDN as a whole: the ability to configure complex protocols across multiple devices. OpenFlow as it is today provides the transport, but you will need a lot more implementation at the controller level to make this attractive long term.
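To make that controller-level point concrete, here is a minimal, purely illustrative Python sketch of a northbound abstraction. Every class and method name below is invented for illustration and does not correspond to any real controller API; the only point is that NBI callers ask for outcomes while the southbound driver (OpenFlow or anything else) stays swappable behind the controller.

    class SouthboundDriver(object):
        """Hides whichever protocol programs the devices (OpenFlow, NETCONF, CLI, ...)."""
        def program_path(self, path, match, actions):
            raise NotImplementedError

    class OpenFlowDriver(SouthboundDriver):
        def program_path(self, path, match, actions):
            for switch in path:
                # Stand-in for sending a flow-mod to each switch on the path.
                print("flow-mod -> %s: match=%s actions=%s" % (switch, match, actions))

    class Topology(object):
        """Toy global network view: a static map of host pairs to switch paths."""
        def __init__(self, paths):
            self.paths = paths
        def path_between(self, src, dst):
            return self.paths[(src, dst)]

    class Controller(object):
        """Northbound API: callers ask for connectivity, not for flow-table entries."""
        def __init__(self, driver, topology):
            self.driver = driver        # southbound choice is hidden from NBI users
            self.topology = topology    # global network view
        def connect(self, src_host, dst_host):
            path = self.topology.path_between(src_host, dst_host)
            self.driver.program_path(path, {"src": src_host, "dst": dst_host}, ["forward"])

    if __name__ == "__main__":
        topo = Topology({("web1", "db1"): ["sw1", "sw3"]})
        Controller(OpenFlowDriver(), topo).connect("web1", "db1")

Swapping OpenFlowDriver for any other SouthboundDriver leaves the northbound call, and anything orchestrating against it, untouched, which is exactly the abstraction Lenrow describes.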

Posted via email from Larkland Morley's posterous

Breaking News: SDN Consolidation Continues, Cisco to Acquire Cariden for $141M


Cisco to acquire Cariden

This morning Cisco announced plans to acquire Cariden to enhance its Service Provider software-defined networking (SDN) solutions.  Surprisingly, this hasn’t been positioned as an SDN play — though from our direct experience with the company, they fit the definition of SDN and have real customers and revenue to prove it.

Equally impressive, Cariden built the company through blood, sweat and tears, bypassing traditional venture capital financing — providing budding entrepreneurs and folks considering joining an SDN startup inspiration to think differently.

I worked with Shailesh Shukla, the executive responsible for the Cariden acquisition, back at Juniper Networks — he’s a smart executive, and he made a great purchase.

Cariden is an example of a network application that can drive adoption of SDN technologies.  For example, as Big Switch announced during their product launch, Cariden is integrated with Floodlight.

We believe this is the start of Cisco acquiring network applications that can eventually be integrated with Cisco ONE.  Expect to see more networking application acquisitions from Cisco in the near future.  We also expect a continued shift toward Cisco and others increasingly acquiring software companies that have bypassed traditional venture capital financing.

Congrats Arman and team!


Cisco Press Release Below.



Cisco Announces Intent to Acquire Cariden
Acquisition Further Strengthens Cisco’s Ability to Lead the Evolution in Service Provider Networking
SAN JOSE, Calif. – Nov. 29, 2012 – Cisco today announced its intent to acquire privately held Cariden Technologies, Inc., a Sunnyvale, Calif.-based supplier of network planning, design and traffic management solutions for telecommunications service providers. With global service providers converging their Internet Protocol (IP) and optical networks to address exploding Internet and mobile traffic growth and complex traffic patterns, Cisco’s acquisition of Cariden will allow providers to enhance the visibility, programmability and efficiency of their converged networks, while improving service velocity.

Cariden’s industry-leading capacity planning and management tools for IP/MPLS (Multi-Protocol Label Switching) networks, which have been deployed by many of the world’s leading fixed and mobile network operators, will be integrated into Cisco’s Service Provider Networking Group to enable multilayer modeling and optimization of optical transport and IP/MPLS networks. Cariden’s products and technology will advance Cisco’s nLight technology for IP and optical convergence. The acquisition also supports the company’s Open Network Environment (ONE) strategy by providing sophisticated wide area networking (WAN) orchestration capabilities. These capabilities will allow service providers to improve both the programmability of their networks and the utilization of existing network assets across the IP and optical transport layers.

“The Cariden acquisition reinforces Cisco’s commitment to offering service providers the technologies they need to optimize and monetize their networks, and ultimately grow their businesses,” said Surya Panditi, senior vice president and general manager, Cisco’s Service Provider Networking Group. “Given the widespread convergence of IP and optical networks, Cariden’s technology will help carriers more efficiently manage bandwidth, network traffic and intelligence. This acquisition signals the next phase in Cisco’s packet and optical convergence strategy and further strengthens our ability to lead this market transition in networking.”

The acquisition of Cariden exemplifies Cisco’s build, buy, and partner innovation framework and is aligned to Cisco’s strategic goals to develop and deliver innovative networking technologies and provide best-in-class solutions for customers, all while attracting and cultivating top talent.

Upon the close of the acquisition, Cariden employees will be integrated into Cisco’s Service Provider Networking Group, reporting to Shailesh Shukla, vice president and general manager of the company’s Software and Applications Group. Under the terms of the agreement, Cisco will pay approximately $141 million in cash and retention-based incentives in exchange for all shares of Cariden. The acquisition is subject to various standard closing conditions and is expected to be completed in the second quarter of Cisco’s fiscal year 2013.

About Cisco

Cisco (NASDAQ: CSCO) is the worldwide leader in networking that transforms how people connect, communicate and collaborate. Information about Cisco can be found at http://www.cisco.com. For ongoing news, please go to http://newsroom.cisco.com.

About the Author

Matt has 20+ years of software-defined networking (SDN), cloud computing, SaaS, & computer networking…

Another twist to the SDN story.

Posted via email from Larkland Morley's posterous

Cloud Computing and Big Data Intersect at NIST, January 15-17

Two major new technologies come together for the Cloud Computing and Big Data Workshop, hosted by the National Institute of Standards and Technology (NIST) at its Gaithersburg, Md., campus Jan. 15-17, 2013.

Combining cloud computing and big data could hasten valuable scientific discoveries in many areas including astronomy. (NASA image of nebula N76 in a bright, star-forming region of the Small Magellanic Cloud.)
Credit: NASA

Cloud computing* offers on-demand access to a shared pool of configurable resources; big data explores large and complex pools of information and requires novel approaches to meet the associated computing and storage requirements. The workshop will focus on the intersection of the two—the meeting is part of the traditional semi-annual cloud computing forum and workshop series, with the added dimension of big data and its relation to and influence on cloud platforms and cloud computing.

"Cloud computing and big data are each powerful trends. Together they can be even more powerful and that's why we're hosting this workshop," said Chuck Romine, director of the NIST Information Technology Laboratory. "The cloud can make big data accessible to those who can't take advantage today. In turn, big data opens doors to discovery, innovation, and entrepreneurship that are inaccessible at conventional data scales."

The January conference will bring together leaders and innovators from industry, academia and government in an interactive format that combines keynote presentations, panel discussions, interactive breakout sessions and open discussion. Patrick Gallagher, Under Secretary of Commerce for Standards and Technology and NIST director, and Steven VanRoekel, the Chief Information Officer of the United States, will open the conference.

The first day's morning panels examine the convergence of cloud and big data, progress on the U.S. Government Cloud Computing Roadmap and international cloud computing standards.

Two afternoon sessions focus on progress made on the Priority Action Plans (PAP)s associated with the 10 requirements described in the first release of the USG Cloud Computing Technology Roadmap, Volume I (NIST SP 500-293).** Each requirement has associated PAPs related to interoperability, portability and security. The meetings will showcase the voluntary, independent, cloud-related efforts on diverse PAPs underway by industry, academia and standards-developing organizations.

The second day of the workshop explores the unprecedented challenges posed by big data on storage, integration, analysis and visualization—demands that many cloud innovators are working to meet today. The workshop will explore possibilities for harmonizing cloud and big data measurement, benchmarking and standards in ways that bring the power of these two approaches together to facilitate innovation. Day three offers workshops on exploring the formation of new working groups at the intersection of cloud and big data, kicking off a Big Data Research Roadmap, discussing international cloud computing standards progress, and hearing the status of the USG Cloud Computing Technology Roadmap Volume III. Special topic briefings will be offered during lunch times.

For more information on the meeting or to register, go to www.nist.gov/itl/cloud/cloudbdworkshop.cfm.

* For the NIST definition of cloud computing, see http://csrc.nist.gov/publications/nistpubs/800-145/SP800-145.pdf
** USG Cloud Computing Technology Roadmap, Volume I (NIST SP 500-293) is available at www.nist.gov/itl/cloud/upload/SP_500_293_volumeI-2.pdf

Interesting read about cloud computing and big data. They are two of the most interesting fields in computing right now.

Posted via email from Larkland Morley's posterous

Saturday, November 10, 2012

Gartner: Mobile Development, Social Media and Cloud Computing Disrupting IT


At a conference in Orlando, Florida, Gartner Inc. said that the forces now at the center of IT — social media innovations, mobile devices, web information, and cloud computing — can disrupt the whole IT environment. Addressing at least 10,000 participants, Gartner Vice President David Cearley said that, at the rate things are going, the mobile experience is overshadowing the desktop experience. Cloud computing, together with mobile devices, is set to alter the modern corporation’s primary computing architecture: instead of focusing on client-server, IT shops must now set their sights on cloud-client architecture.

With this new type of architecture, it is also possible for skill sets necessary for enterprise software development to be altered significantly. The front-end interface must have better designs and development teams must gear towards HTML5 Web browser opportunities aside from the usual mobile device operating systems. Cearley also claimed that consumers have fresh expectations. As such, application developers and architects must obtain new design skills to meet these new expectations.

According to Miko Matsumura, Kii Inc.’s senior VP for platform marketing and developer relations, mobile development has caused traditional architecture to evolve, and a new breed of developers has turned its mobile perspective toward the cloud. In his view, the client cloud is not separate from the programming platform, programming language, or programming model. Meanwhile, Gartner research VP Jim Duggan said that the changes in application lifecycles and development are signs that by 2015 mobile application development will exceed static deployment by 400%. This means the focus should be on developer training as well as outsourcing.

According to Gartner analysts, there will come a time when each corporate budget will be an IT budget and that businesses will have a Chief Digital Officer in their payroll. Gartner further predicts that by 2015, around 25% of businesses will have Chief Digital Officers.

Cloud security is also expected to triple in size, primarily because of regulatory compliance. According to Gartner analysts, IT leaders must plan for upcoming government regulations and interventions. By the end of 2015, Gartner expects larger service providers to acquire cloud-based identity and access management solutions. The analyst group also believes that administrative errors and user-management mistakes will account for about 80% of cloud security incidents in 2013. Businesses that require only basic security environments can rely on the security provided by the public cloud service or infrastructure. Gartner also expects that 60% of large firms will limit the network connectivity of mobile devices personally owned by their staff.

Posted via email from Larkland Morley's posterous

Cloud Computing Gains Continue

The US cloud computing market has quickly grown into widespread usage. About 78% of medium and large enterprises already use or are testing a cloud solution, up from 58% last year. On average, cloud accounts for 4.4% of enterprise IT. Cloud use in enterprise IT is a mile wide, an inch deep, and growing fast.

New research from WaveLength Market Analytics and Winn Technology Group, The Continuing Enterprise Cloud Computing Evolution, shows that 2012 saw the emergence of a new segment: multicloud users, dubbed Cloud Pros, who make up about 19% of the market. The other segments are Cloud Pioneers (59%), who actively use or pilot a cloud; Cloud Planners (12%), who have cloud plans; and Stragglers (10%), who have none.

"The enterprise cloud market and segments have quickly evolved; today's meaningful question is no longer if cloud is used but rather how much," said Natalie Robb, of WaveLength Market Analytics. "Last year, cloud users said they expected 28% of IT to be cloud-based by 2015 and now they expect around 35%. Knowing what sets Cloud Pros and Pioneers apart is crucial for technology and telecom firms to advance technologies and reach buyers."

Other key findings from WaveLength/Winn's report include:

  • Pros and Pioneers both use multiple data centers, but nearly all Pros use AWS, while Pioneers are more likely to use IBM, Verizon, and Rackspace.
  • To prepare for cloud, Pros invested in network performance improvements while Pioneers invested in storage and security.
  • Used by 48% of all cloud users, human resources apps surpassed CRM and email as the most common enterprise application in the cloud.
  • The biggest gains in enterprise and infrastructure cloud service usage came from desktop apps, which grew from 6% last year to nearly 26%, and backup and disaster recovery, which surged from 17% to 38%.

The Continuing Enterprise Cloud Computing Evolution discusses broad trends in the changing cloud computing market. It examines penetration of different service deployment models, projects to prepare for deployment, and cloud enterprise application adoption.

The Continuing Enterprise Cloud Computing Evolution is a joint effort: Winn Technology Group collected the data and WaveLength conducted the analysis. Two more reports on the enterprise cloud market segments will be released in the coming weeks.

This is very interesting data to review when weighing investment in cloud computing.

Posted via email from Larkland Morley's posterous

Wednesday, October 17, 2012

Cisco Execs Plumb The Limits Of Cloud Computing

Cloud computing has become the all-purpose buzzword of business computing -- it can mean pretty much whatever you want it to mean, but every product better have some cloud in it. Networking giant Cisco has totally bought in to the concept, but a couple of top execs also described what they see as limits on how far pure cloud computing will spread.

In a drab conference room out by Oakland Airport (the company's planned Zeppelin excursion to highlight its cloud product launch was scrubbed by bad weather) Cisco's Murali Sitaram (VP/GM Cloud Collaboration Applications) and Lew Tucker (VP, CTO Cloud Computing) explained the company's approach as they introduced additions to Cisco's Cloudverse family.

The Plumbing Behind The Cloud

Instead of just offering its own products on a hosted basis, Cisco's approach is to work with telecom carriers, large enterprises and resellers to help them offer collaboration-and-communication-as-a-service.

The idea, Sitaram said, is to leverage Cisco's partners to provide services without having to become a carrier itself -- which is a daunting, heavily regulated proposition in many parts of the world. "We don't want to be in the carrier business, but we do want to provide services through partners."

Those services include expanding Cisco’s Hosted Collaboration Solution to include TelePresence, Customer Collaboration (contact centers), unified communications and mobility. It also means letting large customers install the company's WebEx online Web conferencing solution in their own data centers.

That may not gibe with most people's definition of cloud computing, but according to Sitaram, many customers still demand more control over their services, either because they're in a highly sensitive industry like the military, health care or financial services, or because they're in emerging markets with restrictive regulations and unreliable public infrastructure.

"It's not easy to deliver cloud-based services" to countries like China, India, Russia and South America, Sitaram said, "especially from the United States." Besides, "the cloud isn't just Facebook and Salesforce," Sitaram added. "If you peel the onion, there are just so many nuances."

When Is The Cloud Not The Cloud?

Nuances or not, earlier this month, I noted that Oracle's Larry Ellison Has Some Strange Ideas About Cloud Computing. Cisco's use of "cloud computing" in this context reminded me of Ellison's Oracle Private Cloud oxymoron, but Sitaram said the Private Cloud version of WebEx retained the service's "quasi-multi-tenant" cloud-based architecture and still offered end-users a subscription based experience. Well, if he says so, but putting your stuff in the customer's data center still ain't what I call "cloud."

Ironically, that may be the point. "Some countries and businesses, they will never go to [Software-as-a-Service]-based clouds," Sitaram said.

So how far will cloud computing go? "There's going to be a world of many clouds," Tucker predicted. Most things will go into the cloud, but many may not. "Many companies want to be their own cloud providers to their workers - but using a cloud model with self-service and pay-as-you-go pricing… the consumer could be an employee."

When I tried to pin down Tucker on exactly how far he thought cloud adoption would go, he guessed 60% cloud, 40% on premises. That seems low to me. After all, once the utilities started providing cheap, reliable power, how many customers still wanted to generate their own electricity?

 

Photos of Sitaram and Tucker by Fredric Paul.

Posted via email from Larkland Morley's posterous

The Open Source Cloud is Ready for Hadoop, Projects Say

Two major trends in enterprise computing this year show increasing overlap: big data processing and open source cloud adoption. 

To Hortonworks, the software company behind open source Apache Hadoop, the connection makes sense. Enterprise customers want the ability to spin up a cluster on demand and quickly process massive amounts of data, said Jim Walker, director of product marketing at Hortonworks, in an interview at OSCON in July. The cloud provides this kind of access by its ability to scale and handle computing tasks elastically.

The open source cloud offers the additional benefit of low-cost deployment and extra pluggability you won’t get with a proprietary cloud infrastructure.

All three major open source IaaS platforms -- OpenStack, CloudStack and Eucalyptus -- have made much progress this year in testing Hadoop deployments on their stacks. And Eucalyptus is working on full integration with the platform.

Although no formal relationship exists between Hadoop and the open source IaaS platforms today, Hortonworks does see potential for collaboration given the nature of cloud computing in general, Walker said.

“(Hadoop) could be a piece of any open cloud platform today,” he said.

Here’s what each of the three major platforms had to say recently about their progress with Hadoop on the open cloud.

OpenStack

In the past, deploying Hadoop in a cloud datacenter proved too challenging for business-critical applications, said Somik Behera, a founding core developer of the OpenStack Quantum project at Nicira, which has since been acquired by VMware. Big data applications require a guaranteed bandwidth, which was difficult to do, Behera said.

OpenStack’s Quantum networking project, which was recently integrated into the new Folsom release, offers an Open vSwitch pluggable networking patch to help ensure performance on Hadoop deployments, Behera said. His Quora post on the topic explains it best:

(See Somik Behera's answer on Quora to "Apache Hadoop: Has anyone tried to deploy an Apache Hadoop cluster on OpenStack?")
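For a sense of what this looks like from the API side, here is a rough sketch of carving out an isolated tenant network for a Hadoop cluster with the Folsom-era python-quantumclient (whose calls mirror today's neutronclient). The credentials, endpoint and addressing below are placeholders, and any bandwidth guarantees would come from how the Open vSwitch plugin is configured underneath, not from these calls themselves.

    from quantumclient.v2_0 import client

    # Placeholder credentials and endpoint for a Folsom cloud running Quantum
    # with the Open vSwitch plugin.
    quantum = client.Client(username='admin', password='secret',
                            tenant_name='hadoop', auth_url='http://controller:5000/v2.0/')

    # A dedicated L2 network for the Hadoop cluster's data traffic.
    net = quantum.create_network({'network': {'name': 'hadoop-data',
                                              'admin_state_up': True}})
    net_id = net['network']['id']

    quantum.create_subnet({'subnet': {'network_id': net_id,
                                      'ip_version': 4,
                                      'cidr': '10.20.0.0/24'}})

    print(quantum.show_network(net_id))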

CloudStack

The biggest challenge for deploying Hadoop on CloudStack has been allocation of resources, said Caleb Call, manager of website systems at Overstock.com and a CloudStack contributor, via email.

“In order to crunch the data we need to in our Hadoop cluster, we currently have many bare metal boxes,” Call said.  “Reproducing this same model in the cloud, even being a private cloud, has proven to be tough.”

Though CloudStack is not currently working on a Hadoop integration, the team has built its cloud environment to guarantee performance for Hadoop workloads by building a dedicated resource pool, said Call, who oversees a team of engineers on the CloudStack project’s “Move to the Cloud” initiative.

“We've also built and tuned our compute templates around Hadoop for this cluster so we don't have to throw large amounts of computing power at the problems,” Call said. “Same as you would do for a bare metal system, but now the saved resources are still left in our compute resource pool available to be used by other Hadoop processes.”

Eucalyptus

At Eucalyptus, performance challenges with Hadoop in the cloud have been largely overcome in the past year, said Andy Knosp, VP of Product at the company.

“There’s been some good research that’s shown near-native performance of Hadoop workloads in a virtualized environment,” Knosp said. This has made Hadoop “a perfect use case” for the open cloud.

Amazon Web Services currently offers the Elastic MapReduce (EMR) service, a hosted Hadoop framework that runs on EC2 and S3. Through the company’s partnership with AWS, Eucalyptus is developing a similar offering that will provide simplified deployment of Hadoop on Eucalyptus.
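As a point of reference for the hosted model the article mentions, here is a short sketch of launching an EMR Hadoop streaming job from Python with the boto library of that era. The output and log bucket names are placeholders; the mapper and input paths are Amazon's published word-count samples.

    from boto.emr.connection import EmrConnection
    from boto.emr.step import StreamingStep

    # Credentials come from the environment or ~/.boto; bucket names are placeholders.
    conn = EmrConnection()

    step = StreamingStep(
        name='Word count example',
        mapper='s3n://elasticmapreduce/samples/wordcount/wordSplitter.py',
        reducer='aggregate',
        input='s3n://elasticmapreduce/samples/wordcount/input',
        output='s3n://my-example-bucket/wordcount-output')

    jobflow_id = conn.run_jobflow(name='Hadoop on EMR sketch',
                                  log_uri='s3://my-example-bucket/jobflow-logs',
                                  num_instances=3,
                                  master_instance_type='m1.small',
                                  slave_instance_type='m1.small',
                                  steps=[step])

    print(conn.describe_jobflow(jobflow_id).state)

A Eucalyptus or private-cloud equivalent would presumably look similar in spirit, which is the point of the integration work Knosp describes below.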

Customers can run Hadoop on the Eucalyptus private cloud platform as-is – no plugins required, Knosp said. But the company also has a team working on integrating Hadoop with the platform for simplified deployment.

“We want to make it as simple as possible for our community and partners to deploy,” Knosp said. “It improves time to market for Hadoop applications.”

 

Posted via email from Larkland Morley's posterous

MIT/Stanford Venture Lab (VLAB): The Revolution of Software Defined Networks


VLAB - Revolution of Software-Defined Networks

Earlier tonight (October 16th, 2012), I had the honor of moderating the panel The Revolution of Software Defined Networks, hosted by The MIT/Stanford Venture Lab (VLAB), with a fantastic set of panelists including:

  • Michael Beesley, Chief Technology Officer, Platform Systems Division at Juniper Networks
  • Kelly Herrell, Chief Executive Officer at Vyatta
  • Awais Nemat, Chief Executive Officer at PLUMgrid
  • Jake Flomenberg, Partner at Accel Partners
With their permission, SDNCentral is making available the slides from the event below.


Event Description:

On July 23, 2012, VMware bought Nicira for $1.26B, validating this revolution in networking. The next week, Oracle acquired Xsigo. Just recently, Cisco acquired vCider. Upstarts claim they will commoditize the expensive networking gear sold by the incumbents using standards like Software Defined Networking (SDN) and OpenFlow (OF). Already, Google and Facebook deploy their own network hardware and software – not the proprietary offerings of incumbent networking players. Many entrepreneurs are betting on SDN and OpenFlow. VLAB engages a robust discussion on SDN.

  • Are we ready for chasm crossing?
  • Is Cloud Computing driving SDN?
  • Who else is using SDN and why?
  • Is SDN a tectonic technology shift, or just a niche?
  • Will incumbents co-opt SDN with closed proprietary implementations?
  • Are we to have a win-win between users and vendors?
  • Where are the opportunities?

About the Author

Matt has 20+ years of software-defined networking (SDN), cloud computing, SaaS, & computer networking…

Very useful information on the latest in SDN.

Posted via email from Larkland Morley's posterous

Wednesday, September 5, 2012

12 hot cloud computing companies worth watching

While big-name players such as Amazon, Google, IBM, Verizon and VMware sit atop the burgeoning cloud computing market, an entire ecosystem of early stage startups is looking to stake its claim, too.

And why not? As Ignition Partners' Frank Artale sees it, enterprises are on the precipice of the next major shift in computing and venture capital firms are "very aggressive" in looking for companies that can help customers ease their transition to the cloud.

"Initially this move will create more complexity," he says. "Companies that can enable the use of cloud, virtual networking and storage will gets lots of attention."


Our list of a dozen such cloud computing upstarts, hailing from locations as far apart as Silicon Valley and Israel, includes those leveraging mobile devices for worker productivity, integrating software-defined networking, and provisioning and monitoring cloud-based services. These companies -- many of which have been able to get up and running by taking advantage of cloud services themselves -- have attracted some $161 million in funding (one snared a $60 million round by itself) and are hungry for more as they look to grow their businesses.

CloudOn


Focus: Optimization of Microsoft Office apps for mobile devices 
Founded: 2009 
Location: Palo Alto, Calif., with offices in Herzliya, Israel 
Management: Former Cisco employees Milind Gadekar (CloudOn CEO) and Meir Morgenstern (CloudOn VP of engineering/operations) 
Funding: $26 million from Foundation Capital, Embarcadero Ventures, Rembrandt Venture Partners and Translink Capital 
Product availability: Free download available on Apple, Android platforms now

Why it's worth watching: Ask Milind Gadekar, and he'll tell you that the workforce of the future will rely even more heavily on mobile devices. But for many workers, the most popular applications they use at their jobs are not optimized to work on mobile devices. That's where CloudOn comes in.

The folks at CloudOn are aiming to make that mobile workforce more productive with their free app that's in public beta. The company specializes in optimizing Microsoft Office for use on phones and tablets across a range of mobile operating systems, including iOS and Android, all using a cloud-based service.

Cisco purchased Gadekar's first startup, named P-Cube, which focused on network optimization for service providers, for $200 million in 2004. After heading up product marketing for the firm, Gadekar left the company three years ago to explore mobile optimization opportunities. That's when he founded CloudOn with Meir Morgenstern, who led the technical side of P-Cube and now serves as VP of engineering for CloudOn. Within a year of founding CloudOn, Gadekar says the best thing that could have happened to the company did: Apple released its first iPad.

With the release of the tablet, employees started bringing their iPads to work, looking to get access to email and their applications. "This was the exact problem we were trying to solve," Gadekar says. In January 2011, CloudOn launched a free version of its app, available in the Apple App Store. Within 12 hours it was the No. 1 app in the entire app marketplace, not just in the productivity category where the company placed it. "Since then, it's been a complete whirlwind," Gadekar says. CloudOn has launched in 80 countries and in 70 of those it became the top downloaded app within 24 hours of launch. The app is now available on Android devices and in just over seven months it's been downloaded 1.8 million times. "People are clearly looking for ways to be more productive, to enhance their mobile experience and to have a way to be mobile-centric," Gadekar says.

CloudOn powers its application using proprietary software developed for optimizing Microsoft Office for use on a gesture-controlled mobile device. On the back end, it leverages file sharing services DropBox, Google Drive and Box, while hosting the software as a service (SaaS)-based application in the Amazon Web Services cloud. The success has fueled the company's further development. Having raised $26 million through two rounds of funding, the company is aiming to start monetizing the product early next year.

DeepField


Focus: Cloud and network mapping and performance benchmarking 
Founded: 2011 
Location: Ann Arbor, Mich. 
Management: CEO Craig Labovitz, previously chief scientist/chief architect at Arbor Networks 
Funding: $1.5 million in seed funding from DFJ Mercury and RPM Ventures
Product availability: Public beta 

Why it's worth watching: Just how well do you know your cloud?

Do you know all of the service providers in the supply chain that make up your cloud service? If you're a service provider, do you know exactly what's going on in your network? DeepField claims it has the answers.

Founded in the fall of 2011 by network security experts who specialized in DDoS protections, DeepField gives customers a deep analysis of what the company calls the cloud genome. It's the exact makeup of a cloud infrastructure and the various vendors and users on the network.

DeepField installs virtual machines on the network to conduct a range of analytical functions. "This allows anyone with a large network or compute infrastructure to get a clear handle on exactly what's happening in their network," says DeepField Chief Data Scientist Naim Falandino. DeepField officials are releasing scant details of how the system works because of a patent pending on the back-end technology, but Falandino says it has the ability to conduct real-time monitoring and mapping.

The company's product is currently in public beta, but DeepField is ramping up for its general availability this fall.

As Network World's Carolyn Duffy Marsan explained in a recent profile, mapping a cloud's architecture can help network operators better understand their cloud services, more easily launch new services and improve system performance.

DeepField engineers have already used their data to yield some interesting findings. In April, for example, co-founder Craig Labovitz described how he used DeepField technology to monitor weeks of network data from several million Internet end users to find that nearly one-third of all Internet traffic is somehow connected to Amazon Web Services infrastructure.

Great to see who the emerging players in the cloud space are.

Posted via email from Larkland Morley's posterous

Friday, August 10, 2012

NASA Mars Mission Fueled By Amazon Web Services

Curiosity's Mars Mission


Cloud computing helps many businesses to wrangle and transmit data packets from different parts of the world. Amazon Web Services (AWS) and NASA's Jet Propulsion Laboratory (JPL) have now upped the stakes with a project that manages the flow of information from another part of the solar system--Mars.

The space agency's $2.6 billion Curiosity rover successfully landed on the red planet early on Aug. 6 and immediately began transmitting information back to Earth. These messages, which travel at the speed of light, take 14 minutes--at the planets' present orientation--to speed across the cosmos to waiting scientists. This long-distance transit would be logistically daunting under any circumstances, but NASA faced an even greater burden because of the huge volume of data these transmissions carry.

To address this profound challenge, JPL is using a wide gamut of AWS tools and services, including EC2, S3, SimpleDB, Route 53, CloudFront, the Relational Database Service, Simple Workflow, CloudFormation, and Elastic Load Balancing. This array of services is vital not only to the mission's research objectives but also to public outreach, as images recorded by the rover are made available almost immediately via JPL's Mars Science Laboratory site .

"NASA wanted to ensure that this thrilling experience was shared with fans across the globe by providing up-to-the-minute details of the mission," according to a case study AWS released to illustrate the project's technical accomplishments. With hundreds of thousands of concurrent visitors anticipated during traffic peaks, the case study asserts that "availability, scalability, and performance of the [site] was of the utmost essence." It also says that prior to AWS implementation, NASA/JPL did not possess the requisite Web and live streaming infrastructure to push hundreds of gigabits of content per second to the legions of site users.

"The public gets access as soon as we have access," Khawaja Shams, manager of data services for tactical operations at JPL, said in an interview. "All the images that come from Mars are processed in the cloud environment before they're disseminated." Services from Amazon "allows us to leverage multiple machines to do actual processing."

The processing itself is complex, as the rover captures images using a stereoscopic system that uses two camera lenses. "In order to produce a finished image, each pair (left and right) of images must be warped to compensate for perspective, then stereo matched to each other, stitched together, and then tiled into a larger panorama," Jeff Barr, an AWS evangelist, wrote in a blog post.

Though complicated, this method is vital to the mission's research goals. "One of the big misconceptions about rovers is that they're driven via joystick," Shams stated, "but even at the speed of light, we can't get there." Because of this limitation, a plan must be uploaded into the rover, which then "semi-autonomously takes care of the whole thing." The image acquisition technique allows researchers to generate geographic metadata that serves as a foundation for these plans. "It gives scientists situational awareness to do the best science and keep the rover safe," said Shams.

According to the AWS case study, such awareness maximizes "the time that scientists have to identify potential hazards or areas of particular scientific interest." This enables researchers to send longer command sequences to the rover, thereby increasing "the amount of exploration that the Mars Science Laboratory can perform during any given sol," or Martian day.

JPL's use of AWS technology furthers NASA's reputation as an early and avid adopter of cloud computing. It likewise continues the agency's goal of addressing the general population, an effort that has led to close relationships with Google and Microsoft, among others. In this instance, public engagement culminates in the Mars Science Laboratory site, which is based on the open-source content management system Railo, running on Amazon's EC2.

The Curiosity mission also extends NASA's recent effort to streamline operations and reduce costs by utilizing Amazon services for cloud-based enterprise infrastructure. NASA CIO Linda Cureton detailed the initiative in a June 8 blog post, writing, "This cloud-based model supports a wide variety of Web applications and sites using an interoperable, standards-based, and secure environment while providing almost a million dollars in cost savings each year." In the context of this mission, AWS allows NASA to track Web traffic in real time, and to scale capacity to meet demand. The cloud infrastructure also allows assets to be distributed intelligently across AWS regions depending on the part of world from which requests originate. This functionality produces a secure and stable environment despite the high bandwidth logistics, AWS said. It also can be economical because AWS downsizes activity when traffic is low, avoiding the problem of expensive but under-used resources.
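That kind of real-time traffic tracking maps naturally onto CloudWatch. The sketch below, again using boto, pulls an hour of request counts for a load balancer; the load balancer name is hypothetical, while the AWS/ELB namespace and RequestCount metric are standard CloudWatch names.

    import datetime
    import boto.ec2.cloudwatch

    cw = boto.ec2.cloudwatch.connect_to_region('us-east-1')

    end = datetime.datetime.utcnow()
    start = end - datetime.timedelta(hours=1)

    # 'msl-site-lb' is a placeholder load balancer name.
    points = cw.get_metric_statistics(
        period=300, start_time=start, end_time=end,
        metric_name='RequestCount', namespace='AWS/ELB',
        statistics=['Sum'], dimensions={'LoadBalancerName': 'msl-site-lb'})

    for point in sorted(points, key=lambda p: p['Timestamp']):
        print(point['Timestamp'], point['Sum'])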

"Science data is growing at an exponential rate. Some upcoming instruments will produce terabytes of data every single day," he said. Such a deluge would have left NASA "out of data center space," making the ability to provision cloud-based machines invaluable. As NASA uses the cloud to solve its own puzzles, opportunities for other applications naturally arise.

"We can provision a supercomputing cluster in the cloud that would qualify as one of the top 500 in the world" at a cost of "a couple hundred dollars an hour," he said. "Think of the possibilities."



This shows the influence and trust that Cloud Computing is getting.

Posted via email from Larkland Morley's posterous

Friday, August 3, 2012

Cloud Computing: Rackspace Kicks Off the OpenStack Cloud Roll-Out

Despite its reported immaturity, Rackspace has gone into production with the Essex version of OpenStack, making it the first large-scale public cloud deployment of the fabled open source platform.

It's positioning Open Cloud as freeing users from vendor lock-in, a taunt directed at Amazon, Google and Microsoft whose customers it expects to run off.

Other OpenStack clouds should follow quickly, say from HP, Dell and Intel, and since they'll be look-alikes, users will be able to flit from one to another.

To get the roll-out started Rackspace is offering public, private and hybrid hosting solutions and says there's unlimited availability of Cloud Databases and both Linux and Windows Cloud Servers on OpenStack.

Some mojo called RackConnect will integrate public and private clouds.

What's new in the production release is the compute piece of the Infrastructure-as-a-Service (IaaS) widgetry, which derives from the Nova code that NASA contributed to the project a couple of years ago, with a pinch of Rackspace's Ozone project. The space agency is no longer supporting the OpenStack effort, having fled to Amazon.

Rackspace contributed its Swift storage piece and has been running it for a few years. Its OpenStack portfolio includes Cloud Files object storage and the Cloud Sites Platform-as-a-Service for .NET, PHP and monitoring.

There's also a new Control Panel that will work with both Rackspace's legacy code and OpenStack. It's supposed to make complex, large-scale cloud deployments as easy as a few mouse clicks. It will also let users tag servers, databases and web sites to identify and organize infrastructure elements; search by name, tag and IP address; filter lists to find a server; use Action Cogs to display contextual menus of most-used actions to complete tasks faster; and get dynamic feedback for real-time status information about the state of the infrastructure.

Rackspace won't force its 180,000 existing cloud customers to migrate, but the new and improved Control Panel is expected to tempt them to move, a process that could take 12-18 months. Rackspace is expected to produce a tool at some point to egg them on although it says it doesn't want to hurry them. How it prevents them from backsliding to the old cloud is unclear.

Otherwise only new customers will default to OpenStack.

Rackspace says users can launch 200 Cloud Servers in 20 minutes. API performance is supposed to be 25x faster for server create requests. And Rackspace claims its MySQL-based Cloud Databases benchmark at 200% faster (3x) performance than Amazon's MySQL-based Relational Database Service (RDS).
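For the curious, booting Cloud Servers programmatically looks roughly like the snippet below, which uses python-novaclient against a generic OpenStack-compatible endpoint. The credentials, endpoint and image name are placeholders, and Rackspace's own authentication extension is glossed over here.

    from novaclient.v1_1 import client

    # Placeholder credentials and endpoint; a real Rackspace account would use
    # its API key and the Rackspace identity endpoint.
    nova = client.Client('myuser', 'my-api-key', 'my-tenant',
                         'https://identity.example.com/v2.0/',
                         service_type='compute')

    image = nova.images.find(name='Ubuntu 12.04 LTS')   # assumes such an image exists
    flavor = nova.flavors.find(ram=512)                 # the 512 MB entry-level flavor

    # Boot a small batch of servers; scale the range up to taste.
    servers = [nova.servers.create(name='web-%02d' % i, image=image, flavor=flavor)
               for i in range(5)]

    for server in servers:
        print(server.id, server.status)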

Linux servers with an entry-level 512MB of memory and 20 gigs of disk will start at 2.2 cents an hour or $16.06 a month. A Windows server with 1GB of memory will run eight cents an hour or $58.40 a month.

Rackspace means to add unlimited availability of Cloud Networks and Cloud Block Storage this fall. Cloud Networks has relied on Nicira, which OpenStack rival VMware bought last week for $1.25 billion.

The widgetry Rackspace mounted Wednesday was in beta test for four months. The push to develop the cloud stack is currently supported by 184 companies and a reported 3,300 programmers.

By the way, Rackspace is using Citrix' XenServer hypervisor in OpenStack, meaning it will have to cater to customers who want to use the VMware hypervisor.

Rackspace will roll out OpenStack in Europe in mid-August.

Good exposure for OpenStack.

Posted via email from Larkland Morley's posterous

Cloud Computing: Lenovo and EMC Now Strategic Partners

In a rare outreach, Lenovo has teamed up with EMC, which will get another entry into the vast Chinese market through its new partner.

The pair is going to form an SMB-focused storage joint venture.

They've also got a server technology development program to extend Lenovo's nascent capabilities in the x86 server segment. The servers will be brought to market by Lenovo and embedded into selected EMC storage systems over time. It could threaten HP.

Lenovo is supposed to provide EMC's networked storage solutions to its customers, initially in China and then in other global markets. Both companies are supposed to do R&D in servers and storage.

Finally, EMC and Lenovo plan to bring "certain assets and resources" from EMC's Iomega business into a new joint venture that will provide Network Attached Storage (NAS) systems to SMBs and distributed enterprise sites, where EMC is seeing rising demand and use of infrastructure-demanding private clouds.

Lenovo wants to be "a leader in the new PC-plus era." EMC expects to significantly expand its presence in China.

The $30 billion-a-year Lenovo will put cash in the joint venture. EMC will contribute those Iomega assets and resources. Lenovo will have the majority interest - presumably that means 51% - and can probably expect better margins than it's used to from PCs. Lenovo currently ships more PCs than anybody else except HP.

It's expecting to see billions from the partnership and wants to grow its 15% share of the Chinese server market to a position of dominance, spring-boarding it into the global market.

Ostensibly Lenovo is replacing Dell, whose close partnership with EMC fell apart because of Dell's storage acquisitions. Of course Lenovo's SMB markets are sexier than Dell's, especially since Europe is such a downer.

Interesting news

Posted via email from Larkland Morley's posterous

Sunday, July 29, 2012

Who Ultimately Pays for Cloud Computing? It Depends (Forbes)

I am an author and independent researcher, covering innovation, information technology trends and markets. I also can be found speaking (and listening!) at business IT, cloud and SOA industry events and Webcasts. I serve on the program committee for this year's SOA & Cloud Symposium in London. I am also one of 17 co-authors of the SOA Manifesto, which outlines the values and guiding principles of service orientation in business and IT. Much of my research work is in conjunction with Unisphere Research/ Information Today, Inc. for user groups including SHARE, Oracle Applications Users Group, Independent Oracle Users Group and International DB2 Users Group. I am also a contributor to CBS interactive, authoring the ZDNet "Service Oriented" site, and CBS interactive's SmartPlanet "Business Brains" site. In a previous life, I served as communications and research manager of the Administrative Management Society (AMS), an international professional association dedicated to advancing knowledge within the IT and business management fields. I am a graduate of Temple University.

The author is a Forbes contributor. The opinions expressed are those of the writer.

Very good question on how the costs for cloud services are accounted for.

Posted via email from Larkland Morley's posterous