
What are the recommended server hardware requirements for eVouala?

What are the recommended server specifications for hosting the eVouala platform? Our enterprise customers ask us this question on a regular basis. There is no magic answer, but the minimum recommended configuration for decent performance would be a server with the following specs:

  • Ubuntu Linux Server OS
  • 8 GB RAM
  • 8 CPU cores
  • 100 GB of disk space in a RAID 1 configuration
  • Fast internet connection (100 Mbps and up)
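
If you want a quick way to compare an existing Linux host against these minimums, a small Python sketch like the one below (standard library only) can report RAM, CPU core count and disk space. The thresholds simply mirror the list above, and DATA_PATH is a hypothetical mount point to adjust to your setup; treat this as a convenience check, not an official requirement validator.

    """Quick check of a Linux host against the minimums listed above.

    A minimal standard-library sketch; the thresholds simply mirror the
    list in this article and DATA_PATH is a hypothetical mount point.
    """
    import os
    import shutil

    MIN_RAM_GB = 8
    MIN_CPU_CORES = 8
    MIN_DISK_GB = 100
    DATA_PATH = "/"  # hypothetical: point this at your data partition

    # Total physical RAM = page size * number of physical pages (Linux only).
    ram_gb = os.sysconf("SC_PAGE_SIZE") * os.sysconf("SC_PHYS_PAGES") / 1024**3

    # Logical CPU cores visible to the OS.
    cpu_cores = os.cpu_count() or 0

    # Total size of the filesystem holding DATA_PATH.
    disk_gb = shutil.disk_usage(DATA_PATH).total / 1024**3

    for label, value, minimum in [
        ("RAM (GB)", ram_gb, MIN_RAM_GB),
        ("CPU cores", cpu_cores, MIN_CPU_CORES),
        ("Disk (GB)", disk_gb, MIN_DISK_GB),
    ]:
        status = "OK" if value >= minimum else "below the recommended minimum"
        print(f"{label}: {value:.0f} (minimum {minimum}) -> {status}")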

Now, the real answer is that three variables come into play when spec'ing a machine for webmapping: RAM, CPU cores and disk configuration. You need to tune those variables to your specific needs.

  • RAM: Get as much RAM as you can. RAM is cheap, and Linux will use it to cache disk access and improve overall performance, especially with large datasets, whether large PostGIS tables or large imagery. The recommended minimum is 8 GB, but if you can afford 64 GB or even 128 GB, get it; you won't regret it.
  • CPU cores: The number of CPU cores to get depends on the number of concurrent users you expect at any given time. For a low-traffic intranet instance with 5 or fewer concurrent users, 8 cores will do great. Add more cores to support more concurrent users. Keep in mind that we are talking about "concurrent users" accessing the system, which is different from the total number of user accounts: you can have an instance with 1000 user accounts where only 4 or 5 people use it regularly, and in that case 8 cores will do great. On the other hand, you can have a smaller organization with only 20 users, all of whom use the platform daily, and in that case a configuration with 16 or more CPU cores would be a smart choice (see the sizing sketch after this list).
  • Disk configuration: Two variables come into play here: the amount of disk space required for your datasets (and backups), and the speed of the disks. The minimum space would be ~40 GB, but with GIS imagery you can easily get into the terabytes, so it depends on your expected dataset size. As for the disk configuration itself, the minimum is a RAID 1 SATA2 (mirror) setup. This gives acceptable speed for general use but can easily be saturated when tiling datasets or running large file-processing operations. Beyond that you can move to higher-end RAID configurations such as RAID 10 (mirror + striping), which is popular. If you can afford them, SSDs will give you top performance. A hybrid RAID array can also be an interesting option for higher performance: SSDs plus spinning disks in a RAID configuration managed by the RAID adapter.

    Of course, bonus points if you can afford to place your whole PostgreSQL database on a pure SSD partition, but that can be expensive if you go beyond a few hundred GB of SSD space.
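
To tie the three variables together, here is a back-of-the-envelope sizing sketch in Python. The ratios it uses (roughly 8 cores per ~5 concurrent users, an 8 GB RAM floor with extra headroom for dataset caching, and ~40 GB of base disk plus room for datasets and backups) paraphrase the guidance above; they are illustrative assumptions, not a formal sizing model, and the suggest_specs helper is purely hypothetical.

    """Back-of-the-envelope sizing helper based on the rules of thumb above.

    The ratios are illustrative assumptions, not a formal sizing model:
    roughly 8 cores per ~5 concurrent users, an 8 GB RAM floor with extra
    headroom for dataset caching, and ~40 GB of base disk plus room for
    datasets and backups.
    """
    import math

    def suggest_specs(concurrent_users: int, dataset_gb: float) -> dict:
        # Cores: 8 covers about 5 concurrent users; scale up in steps of 8.
        cores = max(8, 8 * math.ceil(concurrent_users / 5))
        # RAM: start at 8 GB and add headroom so Linux can cache large datasets.
        ram_gb = max(8, min(128, 8 + int(dataset_gb // 10)))
        # Disk: ~40 GB base system plus datasets, doubled for backups and growth.
        disk_gb = 40 + 2 * dataset_gb
        return {"cpu_cores": cores, "ram_gb": ram_gb, "disk_gb": round(disk_gb)}

    # Example: ~10 concurrent users working with 200 GB of imagery.
    print(suggest_specs(concurrent_users=10, dataset_gb=200))
    # -> {'cpu_cores': 16, 'ram_gb': 28, 'disk_gb': 440}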

Conclusion:

The high-end disk configurations are likely to be the most expensive piece of the puzzle. Start by getting as much RAM as you can, since RAM is cheap and improves performance through disk-access caching, then look into a reasonable disk configuration based on your budget and expected needs. If in doubt, start with good spinning disks; you can always add SSDs later. As for CPU cores, they never hurt, but keep in mind that more cores are mostly useful as the number of concurrent users increases.
