Friday, June 27, 2008

TeamCity scalability: database and disk space

This post is inspired by recent "guidance on planning database space" questions in our community forums and support mail. It also approaches scalability and performance from the point of view of handling the large amounts of data that can be stored inside TeamCity.

The information given here was obtained by observing the JetBrains internal production server that is used for building most of our products (IDEA, ReSharper, TeamCity itself and others).

The TeamCity database size depends on the number and type of builds held in the history. This number increases with each build run and can be reduced during cleanup if a cleanup policy has been set up. By default all information is kept forever, so the database only grows.

On our server the used database space typically grows during the day (proportionally to the number of builds run), then drops almost back to the previous level during cleanup. The average grows relatively slowly, due to pinned builds and the small amount of statistical data being accumulated.

After setting up a new project or configuration (e.g. a release branch) we usually notice rapid growth of the database size until it reaches the next plateau, where new data is balanced by the corresponding cleanup policy.

A typical policy for an active project is "delete artifacts and history for builds older than 14 days since the last build". Note that all pinned builds, and all builds whose artifacts are used by other builds, are unaffected by cleanup.

We have about 8000 builds in the history at the moment. The history is quite "dense" for the last month - 50-100 builds per day for major projects. Beyond some point in the past most builds have been cleaned, so the history becomes quite "sparse" and its "density" gradually decreases the further back you go. The deepest entries in our history are more than 3 years old. Build statistics on our server are kept forever (it is now about a year since this feature was first introduced).

Although in an ongoing development process everything goes smoothly, there are conditions that can lead to huge spikes in space usage. We had an actual case of a build that was set up to be re-run after failure, and one day it was broken in a way that made it fail almost immediately. In no time we had an extra 15000 builds in the history. That didn't actually lead to any problems, because very little information was produced and stored for each run. But in other cases such builds can quickly use up your entire database quota, or the disk space for artifacts and build logs.
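
As a rough illustration of how quickly such a build piles up, here is a back-of-envelope sketch in Python; the timings are assumptions for illustration, not measurements from that incident.

    # Back-of-envelope estimate of how fast a fail-fast, auto-rerun build
    # accumulates history entries. All figures are illustrative assumptions.
    seconds_per_failed_build = 30   # the build breaks almost immediately
    requeue_delay_seconds = 0       # re-run is triggered right after the failure

    builds_per_day = 24 * 60 * 60 // (seconds_per_failed_build + requeue_delay_seconds)
    days_to_15000 = 15000 / float(builds_per_day)

    print("builds per day: %d" % builds_per_day)        # 2880
    print("days to reach 15000: %.1f" % days_to_15000)  # about 5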

Our production server runs under 32-bit Windows XP on quite average hardware: a 3.2GHz Pentium D and 3.25 GB of RAM with swap disabled. We have a separate (big) hard drive for the TeamCity data directory (holding build logs and artifacts).

We are using a MySQL 5.0.4x database as backing storage, running on the same box (configuration: InnoDB with file-per-table mode, caches and buffers increased; it uses about 600 MB of RAM). The database currently takes slightly more than 2 GB on the system partition.
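
If you want to see where the space goes on your own server, a small Python sketch along these lines lists the biggest tables in the schema. It uses the MySQLdb driver and MySQL's information_schema; the connection parameters and the 'teamcity' schema name are assumptions to replace with your own values.

    # Print the largest tables in the (assumed) 'teamcity' schema,
    # ordered by data + index size. Requires the MySQLdb driver.
    import MySQLdb

    conn = MySQLdb.connect(host="localhost", user="teamcity",
                           passwd="secret", db="information_schema")
    cursor = conn.cursor()
    cursor.execute(
        "SELECT table_name, (data_length + index_length) / 1024 / 1024 "
        "FROM tables WHERE table_schema = %s "
        "ORDER BY data_length + index_length DESC LIMIT 20",
        ("teamcity",))
    for name, size_mb in cursor.fetchall():
        print("%-40s %8.1f MB" % (name, size_mb))
    cursor.close()
    conn.close()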

About 50% of the space is used by passed/failed test data (a typical build of ours has 3,000-11,000 unit tests). The next top consumers are inspection builds, compiler output and VCS changes information, taking about 7% each. Other features use about 1% per data domain, with build history and statistics data totaling 2%.

These numbers can be used for planning and predicting database space usage (a rough sketch follows below). However, judging from user feedback, disk space problems (due to the amount of artifacts stored) are encountered much earlier than any problems with the database.
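
As a very rough planning sketch, the figures above can be turned into an estimate like the one below. The per-build footprint is an assumption derived from our own numbers (a bit over 2 GB for about 8000 builds, i.e. roughly 0.25 MB per build); substitute values observed on your own server.

    # Rough database space estimate based on the breakdown above.
    # All inputs are assumptions to replace with your own observations.
    builds_per_day = 75      # mid-range of the 50-100 builds/day mentioned above
    retention_days = 14      # the typical cleanup policy mentioned above
    mb_per_build = 0.25      # ~2 GB / ~8000 builds on our server

    retained_builds = builds_per_day * retention_days
    total_mb = retained_builds * mb_per_build
    print("retained builds: %d, estimated DB size: %.0f MB" % (retained_builds, total_mb))

    # Split the estimate by the observed data domains (~50% tests, ~7% each
    # for inspections, compiler output and VCS changes, the rest elsewhere).
    breakdown = {"tests": 0.50, "inspections": 0.07, "compiler output": 0.07,
                 "VCS changes": 0.07, "everything else": 0.29}
    for domain, share in breakdown.items():
        print("  %-16s ~%.0f MB" % (domain, total_mb * share))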

Although we have integration builds for all supported databases (Oracle 10+, PostgreSQL 8+, MS SQL 2005+) running on each commit, as well as development instances using these databases, we would like to obtain more information about different real-world setups.

Feedback is very much appreciated!

1 comment:

Michael said...

Just curious, how big is the separate hard drive on your main machine?