In my last post, I sketched out the case for investing in detailed planning before moving an estate to the cloud. This time I’d like to focus on the first step of that planning: answering the “What have we got?” question.
To build a thorough understanding of your starting point, you need to gather accurate and comprehensive information about your current IT assets and how they are used. For this, you need an Asset Intelligence platform that provides granular, deep-dive usage metrics and profiles, right down to the level of individual keystrokes and clicks. The platform should make it easy to discover assets wherever they are – not just conventional on-premises assets but also SaaS, cloud and mobile.
You should be able to do all of this with a single analysis tool, so that you get a unified view of your estate – both before and after the move to the cloud – presented on a single dashboard. This is what we at Scalable refer to as the “digital fingerprint.” Without undue effort on your part, your tool should report on the five essential data points that together provide a complete picture of usage, enabling intelligent analysis of your IT assets.
Your Asset Intelligence platform should come with strong normalization capabilities: it should give you the names of the licensable products associated with each user, device or location, rather than a list of low-level components that needs a lot more processing before it yields any insight. The platform should recognize most applications out of the box, and also let you define your own applications to be monitored. You should get release and end-of-life dates, license details and locations, in addition to meaningful taxonomical data.
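To make the normalization idea concrete, here is a minimal sketch of collapsing raw component names into licensable products. The catalog entries, component names and category labels below are invented for illustration; a real platform ships with a far larger, vendor-maintained catalog.

```python
# Hypothetical normalization catalog: maps a raw component/executable name
# (lower-cased) to a (product, vendor, category) tuple. All entries here
# are illustrative, not real catalog data.
NORMALIZATION_CATALOG = {
    "winword.exe": ("Microsoft Word", "Microsoft", "Office Suite"),
    "excel.exe": ("Microsoft Excel", "Microsoft", "Office Suite"),
    "photoshop.exe": ("Adobe Photoshop", "Adobe", "Design"),
}

def normalize(raw_components):
    """Collapse low-level component records into licensable products.

    Returns the set of recognized products plus the list of unrecognized
    components, which are candidates for custom application definitions.
    """
    products = set()
    unrecognized = []
    for name in raw_components:
        entry = NORMALIZATION_CATALOG.get(name.lower())
        if entry:
            products.add(entry)
        else:
            unrecognized.append(name)
    return products, unrecognized

# Raw inventory from one device: two known components, one in-house tool.
products, unknown = normalize(["WINWORD.EXE", "excel.exe", "inhouse_tool.exe"])
```

The point of the sketch is the shape of the output: a short list of products you can map to licenses, plus a residue of unknowns you can register yourself.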
As well as telling you what hardware and software is in use, and where, your platform should give you enough usage information to determine when someone has been allocated more (or more expensive) resources than they really need. For example, with Office 365 it may be that some people are using the full desktop Office applications on a PC when all they really need is the browser-based versions. You will also need to collect performance data, gathering metrics on the utilization of each element of your infrastructure, including CPUs, memory and network bandwidth. Make sure you understand how performance varies over time and how peaks and troughs in usage affect it.
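A simple way to capture how performance varies over time is to reduce a utilization time series to an average, a peak and a high percentile. The sample data below is invented for illustration; in practice you would pull weeks of metrics from your monitoring tooling.

```python
from statistics import mean

def utilization_profile(samples):
    """Return average, peak and (approximate) 95th-percentile utilization."""
    ordered = sorted(samples)
    # Nearest-rank 95th percentile, clamped to a valid index.
    p95_index = max(0, int(round(0.95 * len(ordered))) - 1)
    return {
        "avg": mean(ordered),
        "peak": ordered[-1],
        "p95": ordered[p95_index],
    }

# Hypothetical hourly CPU samples for one server over a day (percent busy).
cpu = [12, 15, 14, 11, 10, 18, 35, 62, 71, 68, 66, 70,
       73, 69, 64, 60, 55, 40, 28, 20, 16, 14, 13, 12]
profile = utilization_profile(cpu)
```

The gap between the average and the peak is exactly the insight you need when right-sizing: a server that averages under 40% busy but peaks above 70% needs different cloud sizing than one that sits flat.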
Collecting this data will help you plan what services and resources you will need from your cloud vendor. It also provides a baseline for post-cloud migration performance validation. Your organization may already have the necessary tools in place because of the need to collect performance data for use in your day-to-day operations.
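Baseline-driven validation can be as simple as comparing each post-migration measurement against its pre-migration value and flagging anything that worsened beyond a tolerance. The metric names, numbers and 10% tolerance below are invented assumptions for the sketch.

```python
def find_regressions(baseline, post_migration, tolerance=0.10):
    """Flag metrics that worsened by more than `tolerance` (fractional).

    Assumes higher values are worse (latencies, runtimes); metrics missing
    from the post-migration data are skipped.
    """
    regressions = {}
    for metric, before in baseline.items():
        after = post_migration.get(metric)
        if after is None:
            continue
        if after > before * (1 + tolerance):
            regressions[metric] = (before, after)
    return regressions

# Hypothetical pre- and post-migration measurements.
baseline = {"order_api_p95_ms": 180.0, "report_job_runtime_s": 420.0}
post = {"order_api_p95_ms": 240.0, "report_job_runtime_s": 430.0}
flagged = find_regressions(baseline, post)
```

Here the API latency regression would be flagged while the small runtime drift would not, which is the kind of signal a post-migration validation step should surface.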
The analysis described above doesn’t require as much effort as you might think, provided you have the right Asset Intelligence platform. The analysis will pay dividends, since you will be in a position to understand your current hardware and software consumption, and what it currently costs you, with a high level of granularity. These insights will provide a firm foundation for your move to the cloud.
In my next post, I’ll consider the question, “What should I move?”
“Everything” is often the wrong answer, but decisions about what to leave out can be challenging. To find out more in the meantime, please download our Smart Guide: Essential considerations for cloud migration planning and cost optimization.
Read the other posts in the Cloud Migration and Cost Optimization series: