Fri, 16 December 2011
The latest version of Google's SDK (software development kit) for its App Engine cloud platform includes the High-Replication Datastore, which is now generally available, the company said in a blog post on Tuesday.
When using the High-Replication Datastore, data is replicated across multiple data centers. This provides the highest level of availability, but comes at the cost of higher latency due to the propagation of data, according to Google.
One of the most significant benefits of the High-Replication Datastore is that applications remain fully available during planned maintenance periods, as well as during unforeseen problems such as power outages, Google said.
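For application code, the switch is mostly invisible at the API level; what changes is write latency and query consistency. A minimal sketch of the tradeoff, assuming the App Engine Python db API of the era (the model and property names are illustrative):

    from google.appengine.ext import db

    class Greeting(db.Model):
        content = db.StringProperty()
        created = db.DateTimeProperty(auto_now_add=True)

    # Under the High-Replication Datastore, this write is replicated across
    # data centers, which is where the extra write latency comes from.
    Greeting(content="hello").put()

    # Non-ancestor queries like this one are eventually consistent: an entity
    # written a moment ago may not show up in the results yet.
    recent = Greeting.all().order('-created').fetch(10)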
Expanding its offerings to small business partners, networking specialist Cisco announced a cloud-based service called OnPlus that gives channel partners a way to provide network assessment, management, and advisory services to their small business customers. By enabling value-added resellers (VARs) to create or expand a managed services practice, OnPlus aims to help evolve the customer relationship from reactive and tactical to proactive and strategic.
The OnPlus Service is offered at a list price of $250, which includes a three-year subscription to the OnPlus service and an OnPlus Network Agent appliance. A separate appliance and subscription are required for each network being managed. Native applications for Apple and Android mobile devices are available free of charge in the Apple App Store and the Android Market.
The announcement builds on Cisco's Partner Led sales model designed to elevate channel partners' ability to drive sales in the small business and midmarket segments.
OnPlus is designed for VARs looking to create or expand their managed service offerings by providing remote visibility into the network and the devices attached to it. Through a scalable cloud-based service, OnPlus helps VARs deploy advanced network services for their small business customers from anywhere, at any time. To monitor a customer network, VARs plug the OnPlus Network Agent appliance into a switch or router on the customer's network. The OnPlus Agent then transmits information about the network to a secure data center for access by the VAR.
In addition to discovery and monitoring of anything with an IP address from any supplier, OnPlus enables remote connectivity to manageable network devices to facilitate troubleshooting and configuration. For select Cisco devices, OnPlus provides enhanced capabilities that automate typical administrative tasks. The network-centric capability of OnPlus also complements existing classes of managed services tools, such as remote monitoring and management and professional services automation.
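Cisco hasn't published OnPlus's internals, but discovery of "anything with an IP address" reduces, at its simplest, to a subnet sweep. A toy sketch of the idea (hypothetical, not Cisco code; Linux ping flags assumed):

    import subprocess

    def sweep(prefix="192.168.1", timeout_s=1):
        """Probe every host on a /24 and return the addresses that answer."""
        alive = []
        for host in range(1, 255):
            addr = "%s.%d" % (prefix, host)
            # -c 1 sends a single probe; -W caps the wait in seconds
            rc = subprocess.call(["ping", "-c", "1", "-W", str(timeout_s), addr],
                                 stdout=subprocess.DEVNULL,
                                 stderr=subprocess.DEVNULL)
            if rc == 0:
                alive.append(addr)
        return alive

A real agent would go further -- ARP tables, SNMP, mDNS -- but the appliance-in-the-closet model means even this much can be collected locally and reported to the cloud.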
"Liberty Technology focuses on making technology easy for both consumers and businesses. With Cisco OnPlus, you're able to get a more complete, 360-degree picture of your customer's network," said Ben Johnson, president of Liberty Technology, a Cisco certified partner. “We've used OnPlus in a number of scenarios from doing network surveys to quickly troubleshooting and identifying problems with customer's networks, which has greatly saved us time and of course money."
IBM is buying Emptoris, a specialist in supply chain and contract management analytics, in what is the latest example of an entrenched IT provider buying analytics and cloud capabilities.
Emptoris bills itself as an analytics supplier and, according to its website, delivers its software in on-premises, hosted, and software-as-a-service (SaaS) models. The news comes just a week after IBM announced its $440 million purchase of DemandTec, a supplier of web-based analytics for retailers.
This overwhelming need for better, glitzier analytics helped drive IBM’s earlier purchase of Netezza, EMC’s acquisition of Greenplum, SAP’s development of the well-regarded HANA, and Oracle’s decision to build Exalytics. Netezza, Greenplum, HANA, and Exalytics are all data analytics appliances.
An interesting aside for those of us in the Bay State: IBM has now bought 20 companies in Massachusetts since it snarfed up Lotus Development Corp. in 1995.
This land grab for analytics, especially in specialized areas, is bound to continue into 2012. Comments from industry leaders over the last few months show that M&A will only continue in this arena. Pat Gelsinger, president of EMC’s information infrastructure products unit, said he expects more action in what he called a $70-billion-and-growing market for analytics. Many visualization; data transport; and extract, transform, and load (ETL) tools still “have to re-emerge in this big data domain,” he said.
Direct download: Cloud_Computing_Podcast_Ep_176.mp3
-- posted at: 6:49 PM
Fri, 9 December 2011
Well, IT pros worried about cloud security need to get over it, said Joe Coyle, CTO of Capgemini, the systems integrator and IT consultancy.
“Everyone is screaming for an accepted security model for the cloud, and I think it’s already here. People just need to take a deep breath,” Coyle said in an interview this week.
On the technology side, his only concern is at the hypervisor level, and even there, it’s not so much about security as it is about auditing. “You need good reporting and auditing tools so that providers can prove that virtual machine A doesn’t encroach on virtual machine B,” he said.
Virtualization is great at carving up a physical environment into multiple pieces, but moving that technology into a shared environment opened a whole can of worms: people worry about overlapping partitions, among other things. Those reporting and auditing tools are now becoming available, he said.
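The kind of report Coyle describes can be illustrated with a short sketch (the inventory format here is hypothetical): given which VM runs on which hypervisor for which tenant, flag every host shared across tenants so an auditor can verify, or challenge, the isolation story.

    from collections import defaultdict

    inventory = [
        {"vm": "vm-a", "host": "hv01", "tenant": "acme"},
        {"vm": "vm-b", "host": "hv01", "tenant": "globex"},
        {"vm": "vm-c", "host": "hv02", "tenant": "acme"},
    ]

    tenants_per_host = defaultdict(set)
    for rec in inventory:
        tenants_per_host[rec["host"]].add(rec["tenant"])

    for host, tenants in sorted(tenants_per_host.items()):
        if len(tenants) > 1:
            print("%s hosts multiple tenants: %s" % (host, ", ".join(sorted(tenants))))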
Chris: G.E. - Microsoft Venture to Create ‘Windows’ for Healthcare
Information technology in health care is as fragmented and balkanized as the health care system itself. The technology silos in health care lead to two afflictions — captive patient and medical information, and the inefficiency of having to tailor code and programs for a bunch of proprietary software systems.
General Electric and Microsoft are announcing a joint venture on Thursday intended to attack the silos. The venture will borrow from a familiar playbook. “This industry needs a Windows-like platform,” said Peter Neupert, the head of Microsoft’s health solutions group.
The “platform” layer takes care of the computer plumbing, so software developers can focus their efforts on the layer above that: applications. That, in turn, can spur innovation and an ecosystem of developers and companies that build on top of the platform. The products the two companies plan to contribute to the venture include:
• Microsoft Amalga, an enterprise health intelligence platform
• Microsoft Vergence, a single sign-on and context management solution
• Microsoft expreSSO, an enterprise single sign-on solution
• GE Healthcare eHealth, a Health Information Exchange
• GE Healthcare Qualibria, a clinical knowledge application environment being developed in cooperation with Intermountain Healthcare (Salt Lake City, Utah) and Mayo Clinic
- You can no longer assume that computing capacity is dedicated to a group of users or a group of processes. Everything in a cloud computing environment is shared using some sort of multi-tenant model. This complicates capacity modeling and planning.
- With auto-provisioning, some aspects of capacity planning decrease in importance because capacity can be allocated as needed. However, because cost is a core driver for the use of cloud computing, using capacity that's not needed reduces the cloud's value.
- You can use cloud computing systems as needed to provide temporary capacity cost-effectively, an approach called "cloud-bursting." The cost of this type of architecture was difficult to justify until cloud computing provided a cheaper "public" option for the overflow (a minimal policy sketch follows this list).
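That policy sketch, with thresholds and numbers invented for the example: serve demand in-house up to a comfort threshold and rent only the overflow.

    LOCAL_CAPACITY = 100      # units of work the in-house cluster can absorb
    BURST_THRESHOLD = 0.85    # start renting before the cluster saturates

    def plan(demand):
        """Split demand between owned capacity and temporary cloud capacity."""
        in_house = min(demand, LOCAL_CAPACITY * BURST_THRESHOLD)
        return {"local": in_house, "cloud": demand - in_house}

    print(plan(60))    # {'local': 60, 'cloud': 0} -- stays in-house
    print(plan(140))   # {'local': 85.0, 'cloud': 55.0} -- overflow is rented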
Direct download: Cloud_Computing_Podcast_Ep_175.mp3
-- posted at: 7:13 PM
Sat, 3 December 2011
Indeed, two researchers at the University of Virginia and four at Microsoft Research explored this possibility in a paper presented at the Usenix Workshop on Hot Topics in Cloud Computing: "The paper looks at how the servers -- though still operated by their companies -- could be placed inside homes and used as a source of heat. The authors call the concept the 'data furnace.'"
The idea is that we'll have micro data centers, meaning small cabinets filled with servers where air flows over the servers to both cool the servers and heat the apartment, office building, or house. All that's needed is a broadband connection and the willingness to see hundreds of blinking lights where your furnace used to be.
Chris: HPCC Systems Tune Big Data Platform for Amazon
HPCC Systems, the division of LexisNexis pushing a big-data processing-and-delivery platform, has tuned its software to run on Amazon’s cloud computing platform. Interested developers can now experiment with the open-source software without having to wrangle physical servers for the purpose, which brings HPCC one step closer to establishing itself as a viable alternative to the uber-popular Hadoop framework.
Hadoop has no shortage of startups, large vendors and individual developers committed to it already. That gives potential users the confidence that not only will Hadoop products be supported for a long time, but that the code will continue to improve and interoperate across a variety of different vendors’ data products, Hadoop-based or not.
Microsoft killing its Dryad data-processing platform to focus on Hadoop opened a door for HPCC Systems, but also served to block its entry into the room. Now there are really only two unstructured-data processing platforms of note, but having Microsoft on the Hadoop bandwagon is yet another sign Hadoop is for real.
Chris: Microsoft cloud to power environmental big data
Cloud computing can be a powerful tool for scientists and researchers sharing massive amounts of environmental data. At the United Nations climate conference (COP 17) in Durban, South Africa, this week, the European Environment Agency (EEA), geospatial software company Esri, and Microsoft showed off the “Eye on Earth” network. The community uses Esri’s cloud services and Windows Azure to create an online site and group of services for scientists, researchers, and policy makers to upload, share, and analyze environmental and geospatial data.
While the Eye on Earth network has been under development since 2008, the group launched three services for different types of environmental data at COP 17, including WaterWatch, which uses the EEA’s water data; AirWatch, which uses the EEA’s air quality data; and NoiseWatch, which combines environmental data with user-generated info from citizens.
Microsoft isn’t the only one working on these kinds of eco big data networks. At last year’s U.N. climate meeting, COP 16, Google launched its own satellite and mapping service called Google Earth Engine, which combines an open API, a computing platform, and 25 years of satellite imagery available to researchers, scientists, organizations, and government agencies. Google Earth Engine offers both tools and parallel-processing computing power so that groups can use satellite imagery to analyze environmental conditions and make sustainability decisions.
Why Some Executives Think Hadoop Ain’t All That
And so the backlash begins. Hadoop, the open-source framework for handling tons of distributed data, does a lot, and it is a big draw for businesses wanting to leverage the data they create and that is created about them. That means it’s a hot button as well for the IT vendors who want to capture those customers. Virtually every tech vendor from EMC to Oracle to Microsoft has announced a Hadoop-oriented “big data” strategy in the past few months.
But here comes the pushback. Amid the hype, some vendors are starting to point out that building and maintaining a Hadoop cluster is complicated and — given demand for Hadoop expertise — expensive. Larry Feinsmith, the managing director of JPMorgan Chase’s office of the CIO, told Hadoop World 2011 attendees recently that Chase pays a 10 percent premium for Hadoop expertise — a differential that others said may be low.
Manufacturing, which typically generates a ton of relational and nonrelational data from ERP and inventory systems, the manufacturing operations themselves, and product life cycle management, is a perfect use case for big data collection and analytics. But not all manufacturers are necessarily jumping into Hadoop.
General Electric’s Intelligent Platforms Division, which builds software for monitoring and collecting all sorts of data from complex manufacturing operations, is pushing its new Proficy Historian 4.5 software as a quicker, more robust way to do what Hadoop promises to do.
“We have an out-of-the-box solution that is performance comparable to a Hadoop environment but without that cost and complexity. The amount of money it takes to implement Hadoop and hire Hadoop talent is very high,” said Brian Courtney, the GM of enterprise data management for GE.
Direct download: Cloud_Computing_Podcast_Ep_174.mp3
-- posted at: 11:50 AM
Mon, 21 November 2011
Direct download: Cloud_Computing_Podcast_Ep_173.mp3
-- posted at: 3:50 AM
Sun, 13 November 2011
Dave: Cloud Expo Roundup
Focus on Data
Focus on Implementation
Many New Products
Cloud Success Beginning to Show
Using Amazon's EC2 (Elastic Compute Cloud) can pose a security threat to organizations and individuals alike, though Amazon's not to blame, according to researchers from Eurecom, Northeastern University, and SecludIT. Rather, third parties evidently are not following best security practices when using preconfigured virtual machine images available in Amazon's public catalog, leaving users and providers open to such risks as unauthorized access, malware infections, and data loss.
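The defense is hygiene on both sides: publishers should scrub images before sharing them, and users should sweep a freshly launched instance for leftovers. A hedged sketch of the user-side check (the filename list covers the usual suspects, nothing exhaustive):

    import os

    SUSPECT_NAMES = ("authorized_keys", "id_rsa", "id_dsa", ".bash_history")

    def audit(root="/home"):
        """Walk home directories, listing files that may hold someone else's secrets."""
        hits = []
        for dirpath, _dirs, files in os.walk(root):
            for name in files:
                if name in SUSPECT_NAMES:
                    hits.append(os.path.join(dirpath, name))
        return hits

    for path in audit():
        print("review before trusting this image:", path)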
By 2014, almost one in three midsize businesses will be using recovery-as-a-service (RaaS), with the ability to back up and restore virtual machines (VMs), according to Gartner.
Gartner predicted that 30 percent of companies will use RaaS over the next few years, with the market being driven, for now, by midsize companies. The research firm defines those as having annual revenues between $150 million and $1 billion.
Today, just over 1 percent of midsize businesses use RaaS as part of their operations. The service, which allows the managed replication of VMs to a service provider's cloud, can eliminate the need to spend as much as $100,000 a year on an in-house disaster recovery budget, Gartner said.
Direct download: Cloud_Computing_Podcast_Ep_172.mp3
-- posted at: 12:46 PM
Sat, 5 November 2011
Because businesses increasingly want to capitalize on information they don't own -- for example, a financial services firm going beyond its transactional data to analyze social data to better understand what customers like and don't like -- DaaS is likely to thrive.
How should IT and business users prepare for DaaS? Here are some recommendations from consultants and other experts.
1. Create a "data mind-set"
2. Don't neglect infrastructure
3. Try before you buy, check references, and insist on SLAs
4. Build a strong governance mechanism
5. Emphasize data quality
6. Ramp up your analytics skills
7. Know when to use DaaS and how to measure results
Imagine plopping down your credit card to turn on compute services late at night, when there's no time to get permission from your boss, and then getting distracted before the weekend by another work emergency. On Monday, when you remember you signed up for the services, which you intended to use for just a short time, you discover you've racked up $5,000 in charges on your personal card.
Developers can use Cloudability, launching Wednesday in an open beta, to track all of their cloud services from one place and to sign up for alarms when they reach spending thresholds. The service also points out unused or underused services, specifying exactly how much money a user can save by turning off those services.
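Cloudability hasn't documented its internals, but the alarm idea reduces to periodically comparing accrued spend against a user-set threshold, roughly like this hypothetical sketch:

    def check_spend(month_to_date, threshold, notify):
        """Fire a notification once accrued spend crosses the alarm level."""
        if month_to_date >= threshold:
            notify("Cloud spend $%.2f has crossed your $%.2f alarm"
                   % (month_to_date, threshold))

    # In the stranded-developer scenario above, a $500 alarm would have
    # fired long before the bill reached $5,000.
    check_spend(5000.00, 500.00, notify=print)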
Can ARM wrestle its way into the server market? Calxeda and Hewlett-Packard think so. On Tuesday Calxeda launched its EnergyCore ARM server-on-a-chip (SoC), which it says consumes as little as 1.5 watts (and idles at half a watt). And HP, the world’s largest server maker, committed to building EnergyCore-based servers that will consume as little as 5 watts when running all out. Compare that to the lowest-power x86 server chips from Intel, which consume about 20 watts but deliver higher performance.
Richard Fichera, a vice president and analyst at Forrester Research, said Calxeda did its homework. “This looks to be at least three to five times more energy efficient than other chips and [energy use] is a growing concern for data centers.” Some of what Calxeda has done will be hard for competitors to replicate, he said.
Direct download: Cloud_Computing_Podcast_Ep_171.mp3
-- posted at: 1:29 PM
Sat, 29 October 2011
Pitz and Fitzgerald projected AWS would account for $751 million of a total $1.2 billion in “Other” revenue for 2011. However, this quarter’s 70 percent year-over-year increase resulted in third-quarter revenue of $407 million, bringing total “Other” revenue to $1.07 billion for the year thus far. If it grows by another 70 percent in the fourth quarter, “Other” will do $546 million for the quarter and almost $1.6 billion for 2011. If UBS’s percentages of AWS revenue to total “Other” revenue are correct, AWS might hit the billion-dollar mark this year. Last year, by comparison, “Other” grew 48 percent year over year in the third quarter and 39 percent in the fourth quarter. Even if it doesn’t grow at all year over year in the fourth quarter, though, it will hit more than $1.3 billion for the year. In-Stat recently predicted that Infrastructure as a Service will be a $4 billion market by 2015, but that might end up being too small a number if AWS continues its rapid revenue climb. The UBS projections, which now look low, have AWS doing close to $2.54 billion in 2014.
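The arithmetic behind those totals is easy to verify; this back-of-envelope check (figures in millions, all taken from the paragraph above) reproduces the quoted numbers:

    ytd_2011 = 1070            # "Other" revenue through Q3 2011
    q4_2010 = 546 / 1.70       # ~321, implied by the 70%-growth scenario

    print(ytd_2011 + q4_2010 * 1.70)   # ~1616 -> "almost $1.6 billion"
    print(ytd_2011 + q4_2010)          # ~1391 -> "more than $1.3 billion" with zero growth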
Commissioned by Dell and Intel, "The Evolving Workforce Report" (part one of a series) aims to identify and explore future trends and themes pertaining to the workplace and workforce, homing in on the role technology plays. As part of that trend -- what the report refers to as "crowdsource services" -- full-time IT departments will be supplemented or replaced by far-flung contract freelancers or teams that are handed piecemeal projects on the fly, in JIT (just in time) fashion.
What's more, the traditional nine-to-five schedule, with employees working at computers on their desks in primarily siloed fashion, will continue to fade away. Instead, workers will have more flexible schedules and will be able to do their tasks on any number of computing devices at all hours of the day. Employee performance will be gauged by output instead of hours logged.
Researchers from the Horst Goertz Institute (HGI) of the Ruhr-University Bochum (RUB) in Germany have demonstrated an account hijacking attack against Amazon Web Services (AWS) that they believe affects other cloud computing products as well.
The attack uses a technique known as XML signature wrapping, or XML rewriting, which has been public since 2005 and exploits a weakness in the way Web services validate signed requests.
The flaw is located in the WS-Security (Web Services Security) protocol and enables attackers to trick servers into authorizing digitally signed SOAP (Simple Object Access Protocol) messages that have been altered.
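At its core, the flaw is that the signature verifier and the message processor can disagree about which element counts. The toy sketch below (deliberately simplified, hypothetical logic, not the actual WS-Security code path) shows how a message keeps a valid signature while the server executes an element the signature never covered:

    def verify_and_dispatch(elements, signed_id):
        """Toy verifier: the signature check resolves the element by its Id
        reference, but the processor executes the first Body it finds."""
        signed = next(e for e in elements if e["id"] == signed_id)
        executed = next(e for e in elements if e["tag"] == "Body")
        if signed is not executed:
            print("wrapped! executing an operation the signature never covered")
        return executed["op"]

    # The attacker keeps the original, still-signed Body but hides it behind
    # a new Body carrying the operation they actually want to run.
    message = [
        {"tag": "Body", "id": "evil",   "op": "RunInstances"},    # unsigned
        {"tag": "Body", "id": "body-1", "op": "DescribeImages"},  # signed
    ]
    verify_and_dispatch(message, "body-1")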
Direct download: Cloud_Computing_Podcast_Ep_170.mp3
-- posted at: 3:07 PM
Mon, 24 October 2011
- According to IDC
- Cloud computing will ensure growth in the storage sector for the next five years
- increased investment in private cloud infrastructure and the growing volumes needed by public cloud providers would lead to combined storage spending of $22.6 billion by 2015
- Public cloud storage growth = 23.6% while private cloud storage growth = 28.9%
- SaaS is the big driver; online apps store video, pics, and music
- Richard Villars, vice president of storage systems at IDC:
- "The challenge facing the storage industry will be to balance public cloud service providers' demand for low-cost hardware while boosting demand for advanced software solutions in areas such as object-based storage, automated data tiering, big data processing and advanced archiving services,"
- big data [is] “perhaps the most critical marketplace” for storage vendors for the next ten years, so they must make these technologies a “high priority.”
- Five requirements for storage:
- Enabling more efficient delivery of information/applications to Internet-based customers
- Reducing upfront infrastructure investment levels (i.e., cutting the cost and time associated with deploying new IT and compute infrastructure)
- Minimizing internal IT infrastructure investment associated with "bursty" or unpredictable workloads
- Lowering and/or distributing the ongoing costs associated with long-term archiving of information
- Enabling near-continuous, real-time analysis of large volumes and wide varieties of customer-, partner-, and machine-generated data (Big Data)
- Chris's take
- Data in those applications is coming from where? From storage systems that are currently owned by the corporations that own the applications - so IDC is making a statement about out with the old and in with the new.
- My question: is this highly scalable, low-cost hardware going to balance out to the same industry in five years, or is this just a cloud stimulus effect that will result in a more centralized environment with lower margins?
Direct download: Cloud_Computing_Podcast_Ep_169.mp3
-- posted at: 5:48 PM
Sun, 16 October 2011
Oversharing is already epidemic. But with iCloud, sharing by default could ruin everything.
Are information technology departments worldwide ready for the cloud?
Despite a high level of interest in cloud computing, IT staffs within organizations say they simply are not ready for it, according to Symantec’s 2011 State of the Cloud Survey.
Fewer than 25 percent of the survey’s respondents say their IT employees have cloud experience, and half of the respondents rated themselves as less than somewhat prepared.
As a result, most organizations are currently turning to outside resources for help. For instance, when deploying hybrid infrastructure or platform-as-a-service, about three in four respondents said they are turning to value added resellers (VARs), independent consultants, vendor professional services organizations or systems integrators.
Google has launched a new service to make its cloud computing platform more appealing to businesses. The company on Thursday introduced a limited preview of Google Cloud SQL, a scalable, hosted MySQL database environment.
Navneet Joneja, product manager for Google Cloud SQL, says that one of the most frequent requests from Google App Engine users has been for an easy way to develop traditional database-driven applications. Using App Engine, Google's platform-as-a-service offering, in conjunction with Cloud SQL allows developers to avoid the burden of database management, maintenance, and administration.
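A minimal sketch of what that looks like from App Engine Python, assuming the rdbms DB-API module the SDK shipped for the preview (instance, database, and table names here are placeholders):

    from google.appengine.api import rdbms

    conn = rdbms.connect(instance='my_instance', database='guestbook')
    cursor = conn.cursor()
    cursor.execute('SELECT content, created FROM greetings '
                   'ORDER BY created DESC LIMIT 10')
    for content, created in cursor.fetchall():
        print(content, created)
    conn.close()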
Direct download: Cloud_Computing_Podcast_Ep_168.mp3
-- posted at: 1:52 AM
Fri, 7 October 2011
Too many organizations overlook regulatory compliance issues when working with cloud computing vendors, says security expert Alastair MacWillson.
When relying on cloud computing partners, many organizations tend to overlook the responsibilities they bear for ensuring ongoing compliance with mandates such as the Payment Card Industry Data Security Standard and the Health Insurance Portability and Accountability Act, better known as HIPAA, says MacWillson of Accenture Technology Consulting.
Red Hat is acquiring privately owned storage vendor Gluster for approximately $136 million in cash to boost its cloud offerings, it said on Tuesday.
The combination of cloud computing and the explosion of unstructured data are forcing enterprises to find new ways to handle storage demands, and the acquisition of Gluster will allow Red Hat to address these challenges, according to Red Hat.
Gluster's software-only storage system lets enterprises combine large numbers of commodity storage and compute resources into a centrally managed, globally accessible storage pool, Red Hat said. Gluster's software also allows enterprises to move storage onto a public cloud, a private cloud, or a mixture of the two, known as a hybrid cloud environment, according to the company's website.
Oracle CEO Larry Ellison on Wednesday unveiled a public cloud service that will run its Fusion Applications and others, and while doing so delivered a withering broadside against competitors, with his harshest words for Salesforce.com.
"Our cloud's a little bit different. It's both platform as a service and applications as a service," he said during a keynote address at the OpenWorld conference in San Francisco, which was webcast. "The key part is that our cloud is based on industry standards and supports full interoperability with other clouds. Just because you go to the cloud doesn't mean you forget everything about information technology from the past 20 years."
Salesforce.com's Force.com platform, by contrast, is the "roach motel" of cloud services, he said, amounting to "the ultimate vendor lock-in" because of its use of custom programming languages such as Apex. The Oracle Public Cloud, Ellison said, uses Java, SQL, XML, and other standards.
Direct download: Cloud_Computing_Podcast_Ep_167.mp3
-- posted at: 7:04 PM