So what is a Cloud Reference Architecture?
When you consume services from a Public Cloud Provider do you really care or want to understand what the underlying architecture looks like? Probably not.
However, if you wanted to build your own Private or Community Cloud with Public Cloud compatibility (a Hybrid Cloud), you would want a reference architecture that you could use to align with strategy, deliver quick wins, and ensure that each tactical decision was made with a full understanding of the knock-on implications. You would also want this architecture to support all of your base Operating System needs (UNIX, Linux, Microsoft Windows), creating a generic Cloud Reference Architecture that can be leveraged to deliver true business benefit.
My previous post talked about Converged Infrastructures for UNIX in the Cloud and whether or not we are ready to make this migration. The image to the left is a Cloud Reference Architecture that I use as a day-to-day guide when developing cloud architectures and solutions for my clients. It is a great source of debate and helps to tease out many of the component areas that are either forgotten or ignored during the definition and design process.
As you may notice, there isn't an 'Operating System' component; rather, there is a 'Workloads' super-component that simply consumes the compute, storage and network component parts, and is in turn supported and managed by the 'Core Infrastructure' super-component.
This simplifies the overall architecture and begins to ease the introduction of the conversation around converged infrastructure.
If all of your 'Server Workloads' and 'Desktop Workloads' could run on a single set of Compute, Storage and Network components, the 'Core Infrastructure' super-component becomes far easier to define, design and operate. Rather than maintaining a plethora of architectures, tools and processes, you can begin to align each competency and start to deliver increased value.
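As a rough illustration of that relationship, the super-components could be sketched in code. All of the names and strings here are my own illustrative assumptions, not part of any formal model: the point is simply that every workload, whatever its Operating System, consumes the same shared pools.

```python
# Illustrative sketch only: 'Workloads' consume shared Compute, Storage
# and Network pools, supported by a single 'Core Infrastructure'.
from dataclasses import dataclass, field

@dataclass
class CoreInfrastructure:
    """The shared pools that every workload consumes."""
    compute: str = "shared compute pool"
    storage: str = "shared storage pool"
    network: str = "shared network fabric"

@dataclass
class Workload:
    name: str
    kind: str  # e.g. "server" or "desktop"

@dataclass
class CloudReferenceArchitecture:
    core: CoreInfrastructure = field(default_factory=CoreInfrastructure)
    workloads: list = field(default_factory=list)

    def deploy(self, workload: Workload) -> str:
        # Any workload, regardless of OS, lands on the same shared pools.
        self.workloads.append(workload)
        return f"{workload.name} ({workload.kind}) -> {self.core.compute}"

arch = CloudReferenceArchitecture()
print(arch.deploy(Workload("SAP ERP", "server")))
print(arch.deploy(Workload("VDI estate", "desktop")))
```

Server and desktop workloads both route to one set of pools, which is exactly what makes the 'Core Infrastructure' easier to define, design and operate.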
This value can be increased further by the ability to utilise spare compute and storage resources easily. Business Services that require increased resources to meet peak processing demands can easily access spare CPU cycles, memory or storage. This is of particular interest when you begin to move large ERP systems such as SAP and Oracle onto a converged infrastructure. More often than not these systems are sized to 'peak', and more often than not that 'peak' is an annual occurrence, meaning that a lot of compute and sometimes storage capacity goes unused for 11 months of the year.
Identifying that spare processing capacity, gaining access to it and protecting service level agreements whilst you do so will deliver huge financial benefit to your organisation.
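To make the scale of that waste concrete, here is a back-of-the-envelope sketch. The figures (64 cores provisioned for the peak, 16 needed the rest of the year) are hypothetical examples, not data from any real system:

```python
# Hypothetical ERP system sized for an annual one-month peak.
peak_cores = 64     # cores provisioned to cover the yearly peak
steady_cores = 16   # cores actually needed for the other 11 months
months_at_peak = 1

# Average cores in use across the year.
avg_cores = (peak_cores * months_at_peak
             + steady_cores * (12 - months_at_peak)) / 12

# Fraction of provisioned capacity sitting idle on average.
spare_fraction = 1 - avg_cores / peak_cores

print(f"Average usage: {avg_cores:.0f} of {peak_cores} cores")
print(f"Idle on average: {spare_fraction:.0%}")  # roughly two-thirds idle
```

Even with these modest assumptions, around two-thirds of the provisioned capacity is idle on average, which is the pool of spare cycles a converged infrastructure lets other Business Services tap into.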
‘This post is brought to you in partnership with Intel(R) as part of the "Technology in tomorrow's cloud & virtual desktop" series’