If you work at an organization that values data, you have likely been asked at some point to run a report. That report probably answered a key business question, such as how many sales you processed last month, or summarized data on your key customer accounts.
What about ad-hoc reporting? Where did you go when you had a specific question and no pre-built report to answer it? Some organizations build large Data Warehousing applications that churn data from multiple sources into what's called a "Cube," or aggregate data store. This is simply a set of pre-computed aggregates (sums, averages, counts, etc.) across large data sets, so that querying is faster than searching the base data (often millions of rows). The Cube is typically built on a batch basis (weekly, nightly, hourly), depending on business need and the technical limitations of the environment. The IT organization could only afford so much server capacity to run the massive data import and analytic jobs.
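The Cube idea can be sketched in a few lines: a batch job rolls the base rows up into a small aggregate store once, and end-user queries then hit that store instead of rescanning the raw data. This is a minimal illustration with hypothetical data; real cubes span many dimensions and millions of rows.

```python
# Minimal sketch of a pre-aggregated "Cube": compute sums and counts
# once in a batch job, then answer queries from the small result.
# All table/column names and values here are hypothetical.
from collections import defaultdict

# Base data: one row per sale (in a real warehouse, millions of rows).
sales = [
    {"region": "East", "month": "2014-01", "amount": 120.0},
    {"region": "East", "month": "2014-01", "amount": 80.0},
    {"region": "West", "month": "2014-01", "amount": 200.0},
    {"region": "West", "month": "2014-02", "amount": 50.0},
]

def build_cube(rows):
    """Batch job: roll up sum and count by (region, month)."""
    cube = defaultdict(lambda: {"sum": 0.0, "count": 0})
    for r in rows:
        cell = cube[(r["region"], r["month"])]
        cell["sum"] += r["amount"]
        cell["count"] += 1
    return dict(cube)

cube = build_cube(sales)

# An end-user query reads the tiny cube, not the base table.
east_jan = cube[("East", "2014-01")]
print(east_jan["sum"], east_jan["count"])  # 200.0 2
```

The trade-off is the one described above: the aggregates are only as fresh as the last batch run, which is why the build window matters so much.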
Sounds like heavy computing. How could the Cloud help?
One of the unique and most valuable attributes of Cloud Computing is that it is elastic. This means you can spin up resources when you need them and spin them down when you don't. I've worked on many data warehouses in my career, and most are very cyclical in their computing needs. For 6-12 hours a day (typically at night), the warehouse churns data as fast as it can to build the aggregate cube so it is ready when end users arrive at work in the morning. For the rest of the day, the load is comparatively light as end users query the aggregate data.
What if you could increase your compute capacity 20-fold for those 6-12 hours at night when the data needs to be loaded? Would that shrink the load window? Could you load more data? Could you run more analytics?
What would you do with the cost savings from shutting off the data load servers during the day? Soon, your CFO will be asking. What will your answer be?
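The savings question can be made concrete with a back-of-envelope calculation. All of the numbers below (hourly rate, server counts, window length) are hypothetical, chosen only to illustrate the elastic-versus-always-on comparison:

```python
# Back-of-envelope daily cost: an always-on fleet sized for the
# nightly peak vs. an elastic fleet that scales 20x only during
# the load window. All figures are hypothetical.
HOURLY_RATE = 0.50      # assumed cost per server-hour
BASELINE_SERVERS = 1    # daytime query load
PEAK_SERVERS = 20       # nightly load window needs 20x capacity
LOAD_HOURS = 8
HOURS_PER_DAY = 24

# Fixed-capacity style: pay for peak capacity around the clock.
always_on = PEAK_SERVERS * HOURS_PER_DAY * HOURLY_RATE

# Elastic style: peak capacity at night, baseline the rest of the day.
elastic = (PEAK_SERVERS * LOAD_HOURS +
           BASELINE_SERVERS * (HOURS_PER_DAY - LOAD_HOURS)) * HOURLY_RATE

print(always_on, elastic)  # 240.0 88.0
```

Under these assumed numbers the elastic approach costs roughly a third as much per day, and the same math works in reverse: for the same budget, you could throw far more compute at the nightly load window.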
Learn more @ 2ndwatch-com-staging.z1t5mcsx-liquidwebsites.com