This article is about running Sybase on a sophisticated UNIX. It discusses sizing Sybase's 'max engines' parameter, the effect of resource management tools, and leveraging UNIX for consolidation. Note that this is not a Sun Blueprint; it's meant to show you that you can, not how to.


I am often asked, “Given that Sybase recommend one engine/CPU, how can you consolidate Sybase onto a large computer?”

The first thing to say is that it is my view that consolidation is about architecture and co-operating systems, not merely an excuse to inappropriately pimp big-iron; an ideal consolidation host may be of any size. However, it is a fact that higher levels of utilisation are more likely if the available system capability is organised as a few large systems rather than an equivalently capable collection of smaller ones.

Sybase is a database, and most databases are good citizens within the operating system. Sybase is no exception: it is a good Solaris citizen and is hence easy to consolidate.

It is absolutely true that Sybase have traditionally recommended that a Sybase engine (implemented as a UNIX process) should be mapped onto a physical CPU, although a common alternative rule is that the number of engines configured ('max engines') should be one less than the number of CPUs, leaving a CPU spare. Either

E = #CPU or E = #CPU − 1

where E is number of Engines, #CPU is the number of CPUs.

The reason these rules are important is that “A well tuned database server will be CPU bound”.

However, the equations above are only performance tuning rules, and what's more, they're out of date. One of the key reasons is that CPUs are now much faster and more capable than when the rules were first developed. Because of this, an application may require only a fraction of a CPU, yet only integer numbers of engines can be deployed. The poverty of these rules is compounded by the fact that they take no account of the capability of the CPUs; they are therefore of little use in deciding the capacity requirements of replacement systems, or in comparing system architectures.

The tuning rules above have been useful where Sybase is the only piece of work the system is running, and only one instance of the database server is running. In these cases, the designer needs to determine how many cycles/MIPS/specINTs etc. the server needs to deliver the expected performance. (I shall refer to the delivered power of a CPU as 'omph', O, for the rest of this article.) With Sybase, if the server requires more than one CPU's worth of omph, then multiple engines will be required:

E = roundup( OReq / Ocpu, 0 )

where OReq is the amount of CPU power required and Ocpu is the capability of a single CPU. If we are looking to use the Solaris Resource Manager, then we need to translate this rule into SRM concepts and talk about shares. The rule above states OReq, and Ocpu × #CPU defines the capability of the system (or pool). In the world of managed resource, the number of engines is


E = roundup( SReq / Stot × #CPU, 0 )

where S is the number of SRM shares, either required (SReq) or defined in total (Stot), and roundup() takes the arguments (expression, sig. figs), with sig. figs set to zero (the digit 0, not O for omph). This assumes that the system proposed is more than powerful enough, i.e.

SReq / Stot < 1

Or in other words, the number of CPUs in a system must be greater than (or equal to) the number of engines. Where this is not the case, the amount of omph delivered will be limited to the number of CPUs, unless further constrained by SRM.
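As a sketch of this sizing arithmetic (Python, with illustrative numbers of my own; OReq, Ocpu and the share counts are invented for the example, not figures from any real exercise):

```python
import math

def engines_from_omph(o_req, o_cpu):
    """Engines needed when sizing directly from omph: E = roundup(OReq / Ocpu, 0)."""
    return math.ceil(o_req / o_cpu)

def engines_from_shares(s_req, s_tot, n_cpu):
    """Engines needed under SRM: E = roundup(SReq / Stot * #CPU, 0).
    Assumes SReq/Stot < 1, i.e. the system is more than powerful enough."""
    assert s_req / s_tot < 1, "system is not powerful enough"
    return math.ceil(s_req / s_tot * n_cpu)

# A server needing 2.4 CPUs' worth of omph, on CPUs delivering 1 omph each:
print(engines_from_omph(2.4, 1.0))       # 3 engines
# The same server granted 30 of 100 shares on a 12-way system:
print(engines_from_shares(30, 100, 12))  # ceil(3.6) = 4 engines
```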

We now have three cases:

  1. where SRM is used to constrain the system capability available to the database server,
  2. where the engine/#CPU ratio is used to constrain the system capability available to the database server, or
  3. where the database server consumes all available system resource.

These might be expressed in our equation language as

O = ( SReq/Stot × #CPU ) || ( E/#CPU × #CPU ) || ( 1 × #CPU )

where O is the proportion of the system consumed and the conditions are,

if SRM on || if E < #CPU, SRM off || if E ≥ #CPU, SRM off.
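The three cases might be sketched as a small Python helper (my own framing, with the simplifying assumption that SRM, when on, is the binding constraint):

```python
def omph_consumed(n_cpu, engines, srm_on=False, s_req=None, s_tot=None):
    """CPUs' worth of omph the database server can consume:
      SRM on             -> SReq/Stot * #CPU
      SRM off, E < #CPU  -> E      (the engine count is the constraint)
      SRM off, E >= #CPU -> #CPU   (the whole system)
    """
    if srm_on:
        return s_req / s_tot * n_cpu
    if engines < n_cpu:
        return engines            # E/#CPU * #CPU = E
    return n_cpu                  # 1 * #CPU

print(omph_consumed(12, 8))                                     # 8: engine-limited
print(omph_consumed(12, 16))                                    # 12: whole system
print(omph_consumed(12, 16, srm_on=True, s_req=25, s_tot=100))  # 3.0: SRM ration
```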

These conditions are important; they show the massive difference in configuration rules depending on how actively the systems are resource managed. A single Sybase instance, or multiple instances, can then be placed under resource management using the S ratio {SReq/Stot} to define the resources allocated to each instance. This can be enforced by user namespace design (Solaris 8) or projects (Solaris 9/10). The permitted resources can be enforced using Solaris Resource Manager, or processor sets, with or without dynamic domain reconfiguration. It should be noted that the 'max engines' parameter can itself be used to enforce rule two in a consolidation scenario; more engines than CPUs can be configured. In one assignment undertaken, the customer required a ratio of two engines per CPU and rationed between server instances by varying the number of engines.

I have written previously here… in my blog about designing the database instance schema as an applications map, and why consolidating Sybase instances onto fewer Solaris instances makes sense. These multiple server instances can be managed using SRM or the Solaris scheduler. (I propose to research and write something more comprehensive about scheduler classes and zones.) I recommend that the operating system should be the resource rationing tool. Only the OS knows about the hardware and the system capability and, unlike Sybase, Solaris with its project construct can approximate an application, and can therefore ration in favour of, or against, such objects. A Sybase server will not discriminate between applications, nor between queries.

In a world of virtual servers and managed resource, another factor is that the number of CPUs in a domain is no longer static within an operating system instance, or any resource management entity. Our ability to move resources into and out of a resource management entity permits us to change the constraints that underpin the rule above. For example, an ASE instance can be started with eight engines and eight CPUs in its processor set, and the processor set can later be shrunk to four CPUs: the ASE originally consumes eight CPUs' worth of omph, and after the configuration change only four.

These options permit configuration rules where the maximum number of engines for a single instance of Sybase may be less than or equal to the maximum number of CPUs planned over time. Also, where multiple data servers are running on a single system, the total number of engines is likely to exceed the number of CPUs. (One leading Sybase user in the City aims at two engines per CPU, while another moves system resources between domains on an intra-day basis.) These are both key consolidation techniques. The number of engines for each ASE server instance should be set to the maximum required, and the resources underpinning it can be changed using dynamic domains, the Solaris processor management utilities or Solaris Resource Manager. Consolidating database servers permits the resource management functionality of the OS (or hardware) to allocate CPU resource to the server, and these allocations can be changed dynamically. This means configuring sufficient engines to take advantage of the maximum available CPUs: if for a short period of the day Sybase is required to use 12 CPUs, and for the rest of the day only four, then 'max engines' needs to be set to 12 and the instance constrained at the times of day when only four are required.
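That last sizing point can be sketched in a few lines (the intra-day profile here is invented for illustration): 'max engines' is set to the peak requirement, and the moment-to-moment ration comes from the resource manager or processor set, not from restarting ASE with a different engine count.

```python
# CPUs required by one ASE instance at different times of day (invented figures)
intraday_cpus = {"batch window": 12, "trading day": 4, "overnight": 2}

# 'max engines' must cover the peak; SRM or processor sets ration the rest of the day.
max_engines = max(intraday_cpus.values())
print(max_engines)  # 12
```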

In summary, in a world where consolidation and virtualisation exist (or are coming), the value of 'max engines' can no longer be assumed to be based on the simple rules of the past.


Originally published on my sun blog, which changed name when Oracle took them over, and copied to my personal blog here in Dec 2014.

Making Sybase Scream
