As cloud computing becomes more mainstream, serious operational "meltdowns" could arise as end-users and vendors mix, match, and bundle services to various ends, a researcher argues in a new paper set for discussion next week at the USENIX HotCloud '12 conference in Boston.
"As diverse, independently developed cloud services share ever more fluidly and aggressively multiplexed hardware resource pools, unpredictable interactions between load-balancing and other reactive mechanisms could lead to dynamic instabilities or 'meltdowns,'" Yale University researcher and assistant computer science professor Bryan Ford wrote in the paper.
Chris: Dell to Host Agfa Medical Imaging Database in the Cloud
Dell has announced it will host Agfa Healthcare's medical imaging archive in the cloud.
The cloud will provide the storage capacity and processing power as doctors look for ways to make medical images compatible with electronic health records (EHRs).
Cloud computing will be an essential factor in this effort, especially when IT budgets are tight, according to Dr. Jamie Coffin, vice president and general manager of Dell Healthcare and Life Sciences.
"The world is moving to a patient-centric view of the [EHR]," Coffin told eWEEK. "You have to start to think about digital radiology, pathology, genomics and figure out how to store this in a format where you take and use it wherever your clinician is."
Storing the medical images in the cloud will allow doctors' offices or medical centers to manage data-intensive images when they lack physical space to store their own servers.
"A single pathology slide can be like 6GB of data because it's a high-resolution image," Coffin noted. "It really brings an ROI to the customers they've never been able to get before [from] on-site image management."
JP: Why You Really, Truly Don’t Want a Private Cloud
In this piece Jason Bloomberg takes on the justification for building a private cloud.
So, should any organization build a private Cloud? Perhaps, but only the very largest enterprises, and only when those organizations can figure out how to get most or all of their divisions to share those private Clouds. If your enterprise is large enough to achieve similar economies of scale to the public providers, then—and only then—will a private option be a viable business alternative.
I believe Jason ignores some very plausible drivers for a private cloud:

a) putting everything outside the enterprise increases latency for transfers that must come back on premises;
b) certain applications will need to remain on premises, and integration is more complex when using a public cloud;
c) VPCs are a misnomer: they are really virtual private networks, and the compute resources are still shared;
d) big data applications carry the overhead of moving the data into the public cloud before it can be used, which requires shipping disks, or very large pipes and a lot of time.

His conclusion that only large enterprises should consider building a private cloud is naive and ignores some key benefits of a private cloud for mid-sized enterprises. However, I will agree that no IT shop should be in the business of being an infrastructure service provider. If an I & O team has to offer IaaS, it is demonstrating the impact of its stratification. Automation is a good thing, but app dev is not I & O's customer; the business is the customer for both of these organizations, which need to work together in a DevOps manner.
Save this one for another time
[Is IaaS the appropriate model for enterprise IT?
I’ve been having this conversation a lot lately with peers. The accepted approach in most stratified IT shops is to adopt cloud computing bottom up. This means it starts with infrastructure & operations and then moves up the hierarchy. It also implies that the primary consumers in the early adopter phase are the application developers / delivery teams. A healthy dose of pragmatism is required during the early adopter phase, but will this approach ultimately endanger the future of cloud computing, much as SOA failed to live up to its potential after being driven by the application development organization? Should one part of IT be in the business of simply provisioning infrastructure for other parts of IT, or should IT as an organization use this opportunity to focus on delivering better services to business consumers? I’m not sold on a positive future for cloud computing if it is built out in this layered approach.]
Jeff: The time for NoSQL Standards is now
A transition to NoSQL (which I prefer to call NewDB) is inevitable. The relational database was created in an era of slow 10MB hard drives and low expectations. NoSQL is the stuff of the Internet age. NoSQL was created for an era when storage is cheap, while performance and scalability expectations are high. It's written for an era of digital hoarders. What does NoSQL need to dislodge Oracle? Principally, an SPI (Service Provider Interface) for database drivers, APIs for major languages and platforms, and a standard query language.
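To make the "SPI for database drivers" idea concrete, here is a minimal sketch of what such a vendor-neutral driver contract might look like, loosely modeled on the ODBC/JDBC pattern. Every name here (`Connection`, `InMemoryConnection`, the "GET" statement syntax) is an illustrative assumption, not an existing API:

```python
# Hypothetical sketch of a standard NoSQL driver SPI, loosely modeled on
# ODBC/JDBC. All names and the toy query syntax are assumptions for
# illustration only -- no such standard exists today.
from abc import ABC, abstractmethod
from typing import Any, Iterable, Mapping, Optional


class Connection(ABC):
    """Vendor-neutral handle to a NoSQL store."""

    @abstractmethod
    def query(
        self, statement: str, params: Optional[Mapping[str, Any]] = None
    ) -> Iterable[Mapping[str, Any]]:
        """Run a statement written in a (hypothetical) standard query language."""

    @abstractmethod
    def close(self) -> None:
        """Release the underlying resources."""


class InMemoryConnection(Connection):
    """Toy driver backing the SPI with a Python dict, for illustration."""

    def __init__(self, data: dict):
        self._data = data

    def query(self, statement, params=None):
        # Supports only "GET <key>" -- just enough to show the contract.
        verb, key = statement.split(maxsplit=1)
        if verb != "GET":
            raise ValueError(f"unsupported statement: {statement}")
        return [self._data[key]] if key in self._data else []

    def close(self):
        self._data = {}


conn = InMemoryConnection({"user:1": {"name": "Ada"}})
rows = list(conn.query("GET user:1"))
print(rows)  # [{'name': 'Ada'}]
conn.close()
```

The point of the sketch is that application code would depend only on the abstract `Connection`, so swapping one vendor's driver for another would not require rewriting queries, which is exactly what ODBC bought the relational world.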
Yet there are obstacles to this transition. First, NoSQL lacks a dominant force. For the RDBMS, no matter which product you choose, you have at least a subset of ANSI standard SQL on which you can depend. For the new databases, you may have Pig, Hive, SPARQL, the MongoDB query language, Cypher, UnQL, or others. These languages have little in common. For the RDBMS, you have at least one connector standard in the venerable ODBC. For NewDB, you must rely on a database-specific connector.
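To illustrate how little these query languages share, here is the same logical query ("names of users older than 30, newest first") written as ANSI SQL and as a MongoDB-style filter. No server is involved; the structures are shown as plain Python values, and the `db.users.find(...)` call in the comment is only a rough sketch of how a driver like pymongo would consume them:

```python
# The same logical query in two notations. The point is not correctness of
# either dialect in detail, but how structurally different they are.

# ANSI SQL: one declarative string.
sql = "SELECT name FROM users WHERE age > 30 ORDER BY created DESC"

# MongoDB style: the query is decomposed into data structures.
mongo_filter = {"age": {"$gt": 30}}        # WHERE age > 30
mongo_projection = {"name": 1, "_id": 0}   # SELECT name
mongo_sort = [("created", -1)]             # ORDER BY created DESC

# With a MongoDB driver the call would look roughly like:
#   db.users.find(mongo_filter, mongo_projection).sort(mongo_sort)
```

A standard query level, as proposed below, would mean one of these notations (or a new common one) could target every store, the way a subset of ANSI SQL targets every RDBMS today.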
What's needed now is for the NoSQL vendors (10gen, Couchbase, and so on), interested parties (such as SpringSource, Red Hat, Microsoft, and IBM), and various projects to come together, take some of these separate efforts, and propose standards. First, define the query level. Then define the connector standards.