Cloud Native Analytics and Cost Reduction

How Cloud-Native Sisense Reduces Costs

The Sisense architecture on Linux is a modern cloud-native design that helps lower the total cost of ownership (TCO) of your Sisense deployment. A multi-node, high-availability deployment of Sisense on Linux can cost about half as much as the same deployment on the Windows architecture. The major contributors to that reduction are lower per-machine costs, autoscaling, and improved machine utilization.

Lower Costs of Linux vs Windows

All of the major cloud providers (AWS, Azure, and Google Cloud) offer both Windows and Linux machines, and customers usually choose their OS based on what their IT and data engineering teams know best. Going with the familiar OS can seem like a simple choice, but financially it’s a significant one: pricing for Windows machines includes licensing and maintenance fees that go to Microsoft, while machines running open-source Linux carry no such fees. On all of the major cloud vendors, that difference can cut the hourly cost of a Linux machine to roughly half the price of the same machine running Windows.
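As a back-of-the-envelope illustration of that gap, the sketch below compares monthly spend for a hypothetical five-node deployment. The hourly rates are made-up placeholders (real prices depend on vendor, region, and instance type); only the roughly 2:1 ratio reflects the licensing difference described above.

```python
# Back-of-the-envelope comparison with purely hypothetical hourly rates;
# actual prices vary by cloud vendor, region, and instance type.
HOURS_PER_MONTH = 730

linux_rate = 0.20    # $/hour for a hypothetical instance running Linux
windows_rate = 0.40  # $/hour for the same instance size with Windows licensing

nodes = 5  # hypothetical multi-node Sisense deployment

linux_monthly = linux_rate * HOURS_PER_MONTH * nodes
windows_monthly = windows_rate * HOURS_PER_MONTH * nodes

print(f"Linux:   ${linux_monthly:,.0f}/month")    # Linux:   $730/month
print(f"Windows: ${windows_monthly:,.0f}/month")  # Windows: $1,460/month
```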

Autoscaling for Peak Times

The great promise of cloud computing was that companies could get the compute and storage they needed, when they needed it, without paying for machines that sit idle the rest of the time. Here, Sisense’s cloud-native architecture comes into play again: the Kubernetes orchestrator in a Linux high-availability deployment enables resource planning that takes peak Sisense usage times into account, adding capacity during peaks and releasing it afterward.

Peak usage takes a variety of forms. You might see a large number of ElastiCube data models being built at once, creating demand for extra CPU and RAM; multiple people working on data models at the same time is almost impossible to plan for. It’s also common for dashboard views to spike at the end of the month, which requires extra query nodes. With Linux autoscaling and Sisense, there’s no need to deploy resources in advance that will sit idle most of the time just to cover maximum usage at peak times. This yields large cost savings compared to reserving fully dedicated machines or permanently adding RAM and CPU to query and build nodes, and it makes the most of what Linux and Sisense offer together.
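To make the autoscaling behavior concrete, the sketch below applies the standard Kubernetes Horizontal Pod Autoscaler scaling rule to a hypothetical set of Sisense query-node pods. The pod counts, CPU figures, and the 50% utilization target are illustrative assumptions, not values from a real deployment.

```python
import math

def desired_replicas(current_replicas: int,
                     current_metric: float,
                     target_metric: float) -> int:
    """Standard Kubernetes HPA scaling rule:
    desired = ceil(current_replicas * current_metric / target_metric)."""
    return math.ceil(current_replicas * (current_metric / target_metric))

# Hypothetical end-of-month spike: 3 query-node pods at 85% average CPU,
# with a 50% CPU utilization target configured on the autoscaler.
print(desired_replicas(3, 85, 50))   # -> 6 pods during the peak

# Once dashboard traffic drops back to ~20% average CPU, the same rule
# scales the deployment back down, so the extra capacity is not paid for
# all month.
print(desired_replicas(6, 20, 50))   # -> 3 pods (ceil(6 * 0.4) = 3)
```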

Maximizing Resource Utilization

In some cases a machine’s RAM and CPU are not fully utilized, and the real constraint is the maximum number of ElastiCubes supported per server. The Sisense Windows architecture supports at most 40 ElastiCubes per server, so it requires deploying more nodes even when RAM and CPU utilization is low. The cloud-native Linux architecture allows a single node (query or build) to support up to 200 ElastiCubes, so the same number of cubes can run on one-fifth of the machines a Windows deployment would require. This translates into cost savings of 2x to 5x, depending on the data sizes of the cubes in the deployment.
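As a rough illustration of that cube-density difference, the sketch below computes the minimum node count for a hypothetical deployment of 400 ElastiCubes under the two per-node limits mentioned above; the 400-cube figure is an assumption chosen only to show the one-fifth ratio.

```python
import math

# Maximum ElastiCubes per node, per the comparison above.
WINDOWS_MAX_CUBES_PER_NODE = 40
LINUX_MAX_CUBES_PER_NODE = 200

def nodes_required(total_cubes: int, max_per_node: int) -> int:
    """Minimum number of nodes needed when the per-node cube limit,
    rather than RAM or CPU, is the binding constraint."""
    return math.ceil(total_cubes / max_per_node)

# Hypothetical deployment hosting 400 ElastiCube data models.
total_cubes = 400
print(nodes_required(total_cubes, WINDOWS_MAX_CUBES_PER_NODE))  # -> 10 nodes
print(nodes_required(total_cubes, LINUX_MAX_CUBES_PER_NODE))    # -> 2 nodes
```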
