Mon, 30 July 2012
Dave: Amazon Keeps Investing in the Cloud
Amazon didn’t wow the Street with its Q2 earnings news, but the company will definitely keep investing in its cloud infrastructure. For its upcoming quarter, it plans to spend a hefty $800 million to $900 million in capital expenditures on technology, said CFO Tom Szkutak.
The company does not — it probably cannot — break out how much of that tech spending flows into the Amazon Web Services computer infrastructure it rents out to customers vs. the IT it uses to run its gargantuan retail business.
One thing is clear: while Amazon is the market-leading cloud service provider, other companies with deep pockets — Microsoft, Rackspace, Google, and Hewlett-Packard among them — are willing to compete with Amazon in this space.
Mobile was a hot topic this year at Black Hat, with a strong focus on client-side vulnerabilities and defenses. Apple made its first-ever appearance at Black Hat, with platform security manager Dallas De Atley walking attendees through the layered approach Apple has taken with iOS and the iPhone. Security has been one of the key deficiencies critics mention when discussing Apple and the enterprise, given that the platform was less mature than RIM’s, which has long been entrenched in the enterprise. De Atley’s presentation shows that Apple is serious about security and the enterprise, and that the iPhone and iOS are ready for business.
As the popularity of mobile devices increases, the server infrastructure needed to support services such as iCloud and push notifications grows exponentially. How much data do we really store on our devices vs. in the cloud? The bulk of our sensitive data is not only on our devices but also spread across servers around the world, across multiple companies and platforms, with differing levels of security.
As more devices are sold that rely on this infrastructure, it becomes an increasingly valuable target for malicious attackers. Think about it: people often use simple passwords and password aggregators, and don’t bother to understand how and when the data on their phone is encrypted the way they do with their laptop. There are so many ways into a person’s smartphone that it would almost be more secure to run a hardened VM than a mobile OS with apps that rely on insecure custom security approaches. Why attack a single device when you can compromise an entire infrastructure and potentially gain access to a much larger trove of data, devices, and users?
I’ve had discussions with David Finn, a Symantec representative who used to be CIO for Children’s in Texas, about how to secure pervasive computing in healthcare. There are vendors out there doing good work; however, there are also a lot of choices that create unsustainable IT operations scenarios. Security is about understanding the risk and accepting a mitigation plan. At the end of the day, it really behooves an organization to use the same security stack for mobile platforms and related servers as it uses for its core infrastructure. Otherwise, you might as well plan on doubling your IT security group, because you’ve increased the required skills and scope.
VMware to Acquire Nicira / Cloud and Software Defined Networking
VMware’s proposed acquisition of Nicira has raised the level of attention being paid to network virtualization in cloud computing. Nicira has been around a while solving complex network virtualization problems for customers, but until this announcement the people aware of them were mostly networking professionals. Only now are cloud architects starting to say, “Hey, what is that stuff, and why is it important to how I build my cloud?” On one hand, additional layers of abstraction and un-optimized software seem counterintuitive for networking, which has predominantly required high performance. That thinking is typical of traditional network designers. Only when you truly understand the value of the parallelism provided by elastic cloud environments can you see that 500,000 packets flooding a single entry point does indeed demand very high performance, but 500,000 packets spread across 500 nodes becomes extremely manageable. Moreover, the ability to redefine the physical endpoints on the fly through SDN provides remarkable power, allowing applications to scale more effectively in a cloud environment.
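The parallelism point above is really just arithmetic; here is a minimal sketch (the packet and node counts are the hypothetical figures from the paragraph, not a benchmark):

```python
# Why distributing traffic changes the performance requirement: the same
# aggregate packet flood becomes modest when spread across many nodes.

def per_node_load(total_packets: int, nodes: int) -> float:
    """Average packets each node handles when load is spread evenly."""
    return total_packets / nodes

single_entry = per_node_load(500_000, 1)    # one entry point: 500000.0
elastic_cloud = per_node_load(500_000, 500) # spread over 500 nodes: 1000.0

print(single_entry, elastic_cloud)
```

Each node only needs to cope with a tiny fraction of the flood, which is why the "un-optimized software" objection loses much of its force in an elastic environment.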
It doesn’t fix bandwidth or storage issues, though.
Jeff: NewSQL and NoSQL Difference
There is a lot of talk about cloud databases; however, most of the conversation is focused on the NoSQL databases. The likes of Hadoop, MongoDB, and Couchbase dominate the chatter. However, there is another category of databases that folks need to start taking a look at. I will admit that these NewSQL databases are immature, so I will not attempt to put them into production use cases. However, they are still worth a look for those unique use cases that neither a traditional SQL database nor a NoSQL database fits. Examples of these databases are NuoDB and Akiban. These NewSQL databases have different and unique capabilities, but they share similar goals: to be ANSI SQL compliant, to support online transaction processing (OLTP), and to store and process massive (petabyte-scale) data loads.
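To make the OLTP goal concrete: NewSQL systems aim to keep ordinary SQL transaction semantics while scaling out. A minimal sketch of those semantics, using SQLite purely as a stand-in (the account schema and amounts are invented for illustration, not from any NewSQL product):

```python
import sqlite3

# Plain ANSI-style SQL with a real transaction -- the atomicity that
# NewSQL databases aim to preserve at much larger scale. SQLite is only
# a stand-in here; the schema is invented.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES (1, 100), (2, 0)")

with conn:  # atomic: both updates commit together, or neither does
    conn.execute("UPDATE accounts SET balance = balance - 40 WHERE id = 1")
    conn.execute("UPDATE accounts SET balance = balance + 40 WHERE id = 2")

balances = dict(conn.execute("SELECT id, balance FROM accounts"))
print(balances)  # {1: 60, 2: 40}
```

NoSQL stores typically relax exactly this kind of multi-row transactional guarantee, which is the gap the NewSQL category is trying to close.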
Direct download: Cloud_Computing_Podcast_Ep_206.mp3
-- posted at: 10:00 AM
Mon, 23 July 2012
Dave: Welcome to the Cloud Computing Podcast, your one stop for news, information, and advice on how to make your way through the emerging world of cloud computing.
This is Ep. 205, and it’s Friday July 20th, 2012, my name is Dave Linthicum, SOA, cloud computing SME, author, CTO and founder of Blue Mountain Labs. With me is my special guest Geva Perry.
Geva Perry has more than 15 years of experience as an executive in the enterprise software industry. His blog, Thinking Out Cloud, on cloud computing and software-as-a-service strategy and marketing is widely read and he is a frequent speaker on the topic. Geva has been named as one of the Top 25 Most Influential People in the Hosting Industry, Top 50 Cloud Computing Bloggers and one of the 12 Top Thinkers in Cloud Computing.
Geva has been an advisor and board member to several cloud computing startups including Heroku, Twilio, New Relic, Xeround, BlazeMeter and Garantia Data. You can follow him on Twitter at http://twitter.com/gevaperry.
- Cloud Open next month in San Diego
- Cloud Connect in September in Chicago
- InterOp in New York City
Don’t forget, you can contact us at: email@example.com.
Also, please make sure to rate us on iTunes and like us on Facebook.
Amazon Web Services Thursday released a solid state drive-backed cloud compute offering that one industry watcher says is among the highest capacity ones on the market.
AWS's High I/O Quadruple Extra Large instance in Elastic Compute Cloud (EC2) includes 2TB of local SSD-backed storage with 60.5GB of RAM running on eight virtual cores. AWS says it can achieve as many as 120,000 random read input/output operations per second (IOPS) and between 10,000 and 85,000 write IOPS. In announcing the offering, Amazon officials wrote that it's aimed at NoSQL databases such as Cassandra and MongoDB.
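Those IOPS figures translate into throughput only once you assume an I/O size; a back-of-the-envelope sketch, assuming small 4 KB random reads (an assumption on our part, since the announcement quotes IOPS rather than block size):

```python
# Rough sequential-equivalent throughput implied by an IOPS figure.
# The 4 KB I/O size is an assumption; a different block size changes
# the result proportionally.

def iops_to_mb_per_s(iops: int, io_size_kb: int = 4) -> float:
    """Convert an IOPS figure to MB/s at a given I/O size."""
    return iops * io_size_kb / 1024  # KB/s -> MB/s

read_throughput = iops_to_mb_per_s(120_000)  # ~469 MB/s at 4 KB reads
print(read_throughput)
```

At 4 KB per operation, 120,000 read IOPS works out to roughly 469 MB/s, which helps explain why the instance is pitched at random-access NoSQL workloads like Cassandra and MongoDB rather than at raw streaming throughput.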
Recent Cloud Outages...A Trend?
Direct download: Cloud_Computing_Podcast_Ep_205.mp3
-- posted at: 2:21 PM
Mon, 16 July 2012
The Department of Defense announced today the release of a cloud computing strategy that will move the department’s current network applications from a duplicative, cumbersome, and costly set of application silos to an end state designed to create a more agile, secure, and cost effective service environment that can rapidly respond to changing mission needs. In addition, the Defense Information Systems Agency (DISA) has been named as the enterprise cloud service broker to help maintain mission assurance and information interoperability within this new strategy.
Chris: 10 Ways the ACA Ruling Could Stimulate Health Care IT
As many in healthcare IT know by now, Obama’s Patient Protection and Affordable Care Act was upheld in a 5-4 decision on June 28th. What does this mean for the cloud? Here’s an interesting rundown of 10 items that resonate well; these are all things I’ve been working on at BML over the past year:
- Health Insurance Exchanges (HIX) will proliferate (think Expedia for Insurance)
- Accountable Care Organizations (ACOs) will increase demand for HIXs (providers have never been incentivized to work together before; it was always a competitive advantage to “own” a patient). This is also driving statewide Health Information Exchanges (HIEs) as mandated by Meaningful Use
- Big data demands will surge. Patient intelligence and outcome management require a level of business intelligence and data mining that makes it clear the fields of data science and clinical informatics are at a nascent stage
- Telehealth will expand. Emergency rooms and in-patient services will initially rise in demand as 33 million more people are required to have insurance. There’s a virtual killing to be made in telehealth, no pun intended.
- Patient centered medical homes will evolve. Remote monitoring and mobile health apps (mHealth) are already becoming available today. Expect a boom in this area as doctors are encouraged to focus on outcomes instead of fee for service due to the ACO.
- Predictive analytics will become a big spend item in order to reduce readmissions. Not everyone is aware that for chronic diseases, if a patient is readmitted too soon after the first discharge, there are regulatory, incentive, and watchdog programs that prevent the system from getting reimbursed for the repeat visits. Indeed, in some cases systems can be fined for poor outcome management. Reducing litigation is also a big driver, so expect more predictive analytics, another nascent industry at best.
- More EHR adoption. Meaningful Use is pushing systems to use them, and there’s been an uptick in adoption as fines start to kick in for healthcare organizations that don’t adopt over time. Sadly, a majority of major EHRs are still based on legacy technology. From derivative, archaic MUMPS databases to user experiences that clearly didn’t have any clinicians involved in the user acceptance testing, EHRs are 15 years behind every other industry’s technology. Expect more cloud options to appear, and expect major EHRs that don’t evolve to be disintermediated.
- There will be a surge in healthcare IT jobs. Virtualization doesn’t decrease the need for skilled workers; it increases it, as people manage more and more VMs as they proliferate, especially in multi-tenant hypervisor environments. There’s also a fear that cloud technology threatens the jobs of IT workers; it’s quite naive when people say “so-and-so installed a cloud app and then fired half of their staff.” It simply doesn’t work that way: you use cloud to do more with your current workforce, and that’s the need. The “work” in health IT is expanding geometrically, and if you don’t leverage cloud, your IT workforce will have to expand as well.
- Chronic disease monitoring will push innovation. Risky patients in an ACO world are not forsaken, but the risk does get stratified so that they can get more healthcare resources dedicated to them. The best approach is a system that allows doctors to monitor chronic disease patients smoothly and also assess their level of compliance with treatment and discharge instructions; mobile health apps and stores like Happtique are making this easier. Doctors can actually be reimbursed for prescribing health management applications now.
- Revenue cycle management capabilities will surge. Already most revenue cycle departments in major healthcare systems are a mess when it comes to application topology, and this is no better in the payer world. The whole process now has to be managed end to end, from pre-certification to final payment. This means it’ll be easier for patients to understand their invoices, as they are no longer getting double billed by an insurance company and an inpatient provider; however, most providers do not have revenue cycle departments mature enough to handle this new level of sophistication.
Direct download: Cloud_Computing_Podcast_Ep_204.mp3
-- posted at: 11:11 AM
Mon, 9 July 2012
Dave: AWS Outage
For the second time this month, Amazon.com's northern Virginia data center suffered an outage, caused by a line of powerful thunderstorms that came through the area late Friday night. Disrupted services included Elastic Compute Cloud, ElastiCache, Elastic MapReduce, and the Relational Database Service. The outage affected Instagram, Pinterest, Netflix, and Heroku, which is used by many startups and mobile apps. (The previous Amazon Web Services outage in Amazon.com's northern Virginia facilities occurred on June 14.)
Chris: VMware Acquires DynamicOps in Cloud Play
This is an interesting story because it allows us to highlight a few misunderstood concepts related to IaaS.
The quote is: “As IT organizations evolve from builders to brokers of services, many seek to provide access to diverse cloud resources in a controlled, managed fashion,” said Ramin Sayar, vice president and general manager, Virtualization and Cloud Management, VMware. “DynamicOps’ multi-cloud and multi-platform capabilities help to strengthen VMware’s position as the infrastructure and management vendor of choice for cloud computing.”
Now, while I like VMware for virtualization, I have not been easy on VMware regarding cloud in the past. No one has: virtualization does not equate to cloud. It does help by encapsulating the servers, and it offers tremendous savings in power, cooling, and compute if you go uniform, but it doesn’t get you any closer to being able to scale business capability out on demand. You still have to scale up, as application servers are not all analogous to web servers; in the healthcare industry this is like having to install an EHR for every two or three hospitals, since most peak out around 500 beds. Scaling out business capability is accomplished at the PaaS, SaaS, and BPaaS layers. Configurations, or as Chef calls them, “recipes,” for infrastructure are just the first thing you figure out. Configurations for middleware, applications, and business processes are more important to business outcomes, and that’s what CMP and BPM get you.
All that said, this acquisition is a step in the right direction as it is yet another force bringing cloud interoperability together.
I’ve heard a lot recently that the hypervisor is a non-factor, they all work and work relatively well now. The differentiator in the virtual data center now is all about virtual infrastructure management (VIM), and that comes down to your management and monitoring platforms. For vCloud shops what this means is that they can now manage Oracle and Citrix.
This gets even more interesting when you look at the cloud management platform (CMP), which is not the same thing as a virtual infrastructure management capability. Using tools like Chef and Puppet in conjunction with VIM tools really lets you master your orchestration of servers and choreography of services. The “Big Four” in that space are IBM, HP, BMC, and CA. I don’t think they have the best tools in the CMP space; rather, whenever a gap is recognized they buy a company, and that has gotten them market dominance in the short to mid term.
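The core idea behind Chef recipes and Puppet manifests mentioned above is declare-then-converge: you state the desired end state, and applying it twice changes nothing the second time. A toy sketch of that model (all resource names here are invented, not real Chef or Puppet syntax):

```python
# Declare-then-converge in miniature: `converge` applies only the changes
# needed to reach the desired state, so re-running it is a no-op
# (idempotence). Resource names are invented for illustration.

desired_state = {"nginx": "installed", "app_server": "running"}

def converge(current: dict, desired: dict) -> list:
    """Apply `desired` to `current`, returning the actions taken."""
    actions = []
    for resource, state in desired.items():
        if current.get(resource) != state:
            actions.append((resource, state))
            current[resource] = state  # simulate applying the change
    return actions

node = {}
first_run = converge(node, desired_state)   # performs both changes
second_run = converge(node, desired_state)  # nothing left to do
print(first_run, second_run)
```

That idempotence is what makes these tools safe to run repeatedly across a fleet, which is exactly the property a CMP needs when it orchestrates hundreds of servers.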
In summary: a good move by VMware to position themselves as the VIM of choice, but I still don’t see any true CMP capabilities, and that shouldn’t be ignored.
Jeff: MapR's Google Deal Marks Second Big Data Cloud Win
MapR's latest deal is tied to Google's big June 28 announcement of Google Compute Engine, a new infrastructure-as-a-service (IaaS) offering that sets up the search giant as a public-cloud rival to Amazon Web Services (AWS). MapR is one of at least six partners debuting services on the Google infrastructure, which is currently in limited beta release. MapR and Google are currently signing up customers to join a private preview of the Hadoop services that will run on Google Compute Engine.
News of the Google partnership came just two weeks after MapR and Amazon announced that services based on its M3 and M5 Hadoop software distributions would be available on AWS.
MapR distinguishes itself from Hadoop software distribution and support competitors Cloudera and Hortonworks by providing high-performance options not supported on standard Apache open source Hadoop software. MapR's M5 distribution, for example, replaces the Hadoop Distributed File System (HDFS) with a derivative of the Unix-based Network File System. M5 includes snapshotting, mirroring, and other high-availability features that aren't supported on the current (1.0) Hadoop code line.
Pricing and service details have not been finalized for MapR's services on the Google Compute Engine. Basic compute pricing on the Compute Engine starts at $0.145 per hour for a single core with 3.75 gigabytes of memory.
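As a quick sanity check on that rate, here is the back-of-the-envelope monthly cost per core (the 730-hour month is our assumption, a common billing approximation; actual Compute Engine billing terms were not final at the time):

```python
# Monthly cost implied by the quoted Compute Engine rate:
# $0.145/hour for one core with 3.75 GB of RAM.

HOURLY_RATE = 0.145       # USD per core-hour, from the announcement
HOURS_PER_MONTH = 730     # assumed billing month (24 * ~30.4 days)

monthly_cost = HOURLY_RATE * HOURS_PER_MONTH
print(f"${monthly_cost:.2f} per core-month")
```

That puts a single always-on core at roughly $106 a month before any Hadoop service fees, a useful baseline when comparing against AWS instance pricing for the same MapR distributions.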
Direct download: Cloud_Computing_Podcast_Ep_203.mp3
-- posted at: 10:00 AM
Mon, 2 July 2012
With me is my special guest James Urquhart:
James is a seasoned field technologist with 20 years of experience in distributed systems development and deployment, focusing on service-oriented architectures, cloud computing, and virtualization. Prior to joining enStratus, Mr. Urquhart held leadership roles in cloud computing for Cisco Systems. Urquhart also held leadership positions at Cassatt Corporation, Sun Microsystems and Forte Software.
Named one of the ten most influential people in cloud computing by the MIT Technology Review, The Next Web, and Wired Cloudline, and a popular contributing author at GigaOm, Urquhart brings a deep understanding of this disruptive IT model and the business opportunities it affords.
Don’t forget, you can contact us at: firstname.lastname@example.org.
Also, please make sure to rate us on iTunes and like us on Facebook.
- Review of GigaOM Structure Conference.
- Recent Google IaaS announcements.
- DevOps Days.
Direct download: Cloud_Computing_Podcast_Ep_202.mp3
-- posted at: 10:00 AM