Multi-Threaded Server (also known as Shared Server)                               Networking Tips




What is Multi-Threaded Server?

By default, and ‘out of the box’, when a User connects to an Oracle Instance, Oracle
grants that User exclusive use of various resources. In particular, each User is given a
Server Process, and some private memory (called the Program Global Area, or PGA) for
storing session-specific variables and cursor states, and in which to perform sorts. Which is all
well and good –except that those resources have to be reserved (and maintained) for that
User even if he or she steps away from their PC for a lunch break, shopping trip or lengthy
visit to the local pub.

That sort of configuration (remember, it’s the default one) is known as ‘Dedicated Server
Configuration’, because each User gets allocated a single Server Process, which is
therefore dedicated to their exclusive use. It’s easy to configure, and works well. But it
can be a waste of resources, as I’ve already suggested, since the existence of a Server
Process isn’t dependent on whether the User is actually making good use of it: it exists
until the User remembers to disconnect, regardless of how much work the User makes it do.

There’s also a problem when your database has to service the requests made by several
dozen Users. Remember that each and every one of them gets their own dedicated Server
Process and PGA. When you are up to around 50 – 100 Users, that many sets of dedicated resources
starts becoming a significant management burden for the machine that they are all
connected to. When boxes feel the pressure, performance starts tailing off.

Enter, therefore, the ‘Multi-Threaded Server’ configuration (MTS for short). This is a way
of configuring Oracle in such a way as to ensure that Users are not given their own
dedicated Server Processes, but instead have access to a pool of pre-spawned Server
Processes. When Users submit queries, any one of the pool of processes which happens to
be idle can process the request. When done with the request, that process becomes idle
again –and is thus free to work on the requests submitted by any other User.

In this way, a User that pops out for a coffee break isn’t tying up an entire Server Process.
Other Users who are still hard at work can continue to make effective use of the same pool
of processes. Therefore, in MTS configurations, there is a more efficient utilisation of
scarce resources. In turn, that implies that as we acquire more and more Users, we don’t
need to keep spawning more and more processes to deal with them. And if we don’t need
so many processes, we can accommodate far more User connections than in Dedicated
Server configuration, without the machine itself keeling over under the strain.

I should at this point perhaps point out that there is nothing very multi-threaded about
Multi-Threaded Server! MTS works with a pool of Server Processes (and some other
processes called Dispatchers, which we’ll get to shortly). These are all separate processes
on a Unix box, not threads within a process (which a strict interpretation of the words
‘multi-threaded’ might imply). Of course, on NT, MTS is truly multi-threaded –but then, so
is Dedicated Server!! That’s just the way Oracle is implemented as a whole on NT.

Because of this sort of potential confusion surrounding the precise nature of the words
‘Multi-Threaded’, Oracle has finally done the decent thing in version 9i, and renamed the
MTS Configuration as “Shared Server Configuration”, which has the distinct advantage of
accurately describing its essential feature –that Users make shared use of a pool of Server
Processes, and thus each process is shared amongst many Users. Apart from the name
change, though, there is no difference between what used to be called MTS and what is
now called Shared Server. For the rest of this article, I’ll use the term ‘Shared Server’,
since that is the more current (and, to my mind, the more accurate and descriptive) name.

Now to the architectural features of Shared Server Configuration.

When you start up an Instance that has been configured for Shared Server connections, a
number of processes called Dispatchers are spawned, and register themselves with the
Listener. There are as many Dispatchers spawned as you specify with the DISPATCHERS
parameter in the init.ora. The job of the dispatcher is to take User requests and place
them in a Job Queue for processing by a Server Process. It is also responsible for returning
the results of a query back to the User when they are available. It is to a Dispatcher that a
User is actually connected in Shared Server configuration: the Listener, upon receiving a
connection request, chooses the Dispatcher with the least number of existing connections,
and directs the User to connect to that Dispatcher.
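
To make that concrete, here is a minimal sketch of the relevant init.ora setting, using the 9i syntax; the protocol and the dispatcher count shown are purely illustrative, not recommendations:

    # init.ora (9i syntax; in 8i the equivalent parameter was MTS_DISPATCHERS)
    DISPATCHERS = "(PROTOCOL=TCP)(DISPATCHERS=3)"

Once the Instance is started, a quick ‘lsnrctl services’ should show the Dispatchers (D000, D001 and so on) registered as service handlers alongside the ordinary dedicated handler.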

Also spawned when you start the Instance is whatever number of Server Processes you have
configured with the SHARED_SERVERS parameter. Each of these processes sits idle until a
User job is placed on the Job Queue by the Dispatchers –at which point, one of them will
retrieve the job details from the queue, and execute them (for example, retrieving the
rows from the EMP table in response to a select * from EMP request). The results for the
job are placed in a Response Queue.
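
Again as a sketch only (the numbers here are invented for illustration), the corresponding init.ora entries might look like this; MAX_SHARED_SERVERS simply caps how many extra Server Processes Oracle is allowed to spawn if the initial pool proves too small:

    # init.ora - illustrative values only
    SHARED_SERVERS     = 5      # shared Server Processes started with the Instance
    MAX_SHARED_SERVERS = 20     # ceiling if Oracle needs to spawn more under load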

There is one Response Queue per Dispatcher process (there is only a single Job Queue,
however), and since the Server Process knows which Dispatcher placed the original job on
the Job Queue, it ensures it places the results on the appropriate Response Queue.

Once the results of a query have been placed on the correct Response Queue, the Server
Process returns to the idle state, until the next request is posted onto the Job Queue. We
can summarise all those words with this simple picture:

[Diagram: Shared Server architecture. Inside the SGA sit a single Job Queue and one Response Queue per Dispatcher (Response Q1, Response Q2). Users connect to Dispatchers DISP1 and DISP2; a pool of Server Processes (S, S, S) services the Job Queue. The numbers 1 to 7 on the arrows correspond to the steps listed below.]

    1.        User submits a query (or a piece of DML/DDL etc)
    2.        Dispatcher to which the User is connected places job on Job Queue
    3.        One of the Server Processes retrieves job from Job Queue …
    4.        …and starts to process it in the usual way (blocks read into Buffer Cache etc)
    5.        When all required blocks retrieved, the results are fetched by the Server
              Process…
    6.        …back to the Response Queue for the originating Dispatcher…
    7.        …which returns them back to the correct User.
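
If you want to watch these pieces at work, a handful of V$ views expose them. The queries below are a rough sketch only; the exact column lists can vary a little between versions:

    -- Dispatchers, and how busy each one is
    SELECT name, status, messages, busy FROM v$dispatcher;

    -- The pool of shared Server Processes
    SELECT name, status, requests FROM v$shared_server;

    -- The common (Job) Queue and the per-Dispatcher Response Queues
    SELECT paddr, type, queued, wait, totalq FROM v$queue;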

That’s all there is to Shared Server configuration, really. Everything else continues to
work in exactly the same way as in Dedicated Server configuration… you still have row-
level locking, you still can’t see data that was committed after the time your query was
submitted, redo and rollback are generated by transactions as normal, and we still cache
execution plans in the Library Cache of the Shared Pool. And so on.

There is just one other thing to mention, however. In Dedicated Server configuration, as
was mentioned at the start of this article, each Server Process comes attached to a piece
of memory that’s for a User’s private use, known as the PGA. The PGA is actually made up
of three separate memory areas, known as the Stack Space, the User Session Data area and
the Cursor State area. The Stack Space contains local variables for the Server Process; the
User Session Data contains resource usage and security information, as well as the area
where in-memory sorts are performed; and the Cursor State area contains run-time memory
values for the SQL statements issued by the User, such as the number of rows returned.
Clearly, two of these areas are intimately associated with the User (the Cursor State and
User Session Data areas), whereas the Stack Space is more related to the Server Process
itself.
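
If you want to see those per-User areas in terms of actual memory, the session statistics expose them. The query below is a sketch only; the statistic names shown are the usual ones, but check v$statname on your own version:

    SELECT s.sid, n.name, st.value
    FROM   v$session s, v$sesstat st, v$statname n
    WHERE  s.sid = st.sid
    AND    st.statistic# = n.statistic#
    AND    n.name IN ('session uga memory', 'session pga memory');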

In Shared Server configuration, we can’t leave the two areas related to a specific User
attached to the PGA (and hence attached to the Server Process), because the Server
Process is shareable amongst many Users. Therefore, we have to remove those two areas
from the PGA (which, incidentally, are together known as the UGA or User Global Area),
and store them somewhere else… and by default, that “somewhere else” is in the SGA.
Now that’s a problem in the making: to make room for the UGA memory, we have to steal
it from other components of the SGA –and that means that we have to steal it from the
Shared Pool (which is the only real candidate), and in particular, the Library Cache of the
Shared Pool.

When the Library Cache gets too small (which is what is likely to happen in this scenario),
parsed SQL statements start getting aged out of memory rapidly. And that, in turn, means
that we will end up performing far too many “hard parses” as Users continue to submit SQL
requests. This can seriously compromise performance for an OLTP environment.

Obviously, one possible cure for this would be to increase the setting of
SHARED_POOL_SIZE in the init.ora –and if you’re going to switch to Shared Server mode,
then this is the bare minimum adjustment to existing parameters that you should
undertake. But, in fact, this is a poor approach to the problem, because UGAs have a
habit of growing and shrinking over time, depending on what actual SQL the User is
submitting. When you have growth and shrinkage of memory in the Shared Pool, you tend
to end up with Shared Pool fragmentation, which can prevent new SQL or PL/SQL requests
from being processed at all.
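
The usual symptoms are easy enough to spot. As a rough sketch (the views are standard, but what counts as ‘too many’ reloads or ‘too little’ free memory is a judgment call for your own system):

    -- Reloads suggest parsed statements being aged out and re-parsed
    SELECT namespace, pins, reloads FROM v$librarycache;

    -- How much free memory the Shared Pool currently reports
    SELECT pool, name, bytes FROM v$sgastat
    WHERE  pool = 'shared pool' AND name = 'free memory';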

Fortunately, there’s a much better cure. If a Large Pool has been configured at startup,
then Oracle will place the UGA in there, rather than in the Shared Pool. That means we
experience no Shared Pool shrinkage, and no Library Cache fragmentation. Therefore, the
rule is simple: if you are going to configure for Shared Server, make sure you configure a
Large Pool in your init.ora. You do so by setting LARGE_POOL_SIZE to some number of
bytes. Making sure you’ve got the right size for your Large Pool is a tuning exercise in
itself, but starting with at least 1000000 bytes (roughly 1Mb) is probably wise.
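
A minimal sketch of what that looks like, using the 1Mb starting point suggested above (tune upwards from there), together with a quick check that the Large Pool is actually being used once you have switched to Shared Server:

    # init.ora - starting point only; sizing the Large Pool properly is a tuning exercise
    LARGE_POOL_SIZE = 1048576

    -- After restarting in Shared Server mode, UGA allocations should appear here
    SELECT pool, name, bytes FROM v$sgastat WHERE pool = 'large pool';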

Shared Server is not difficult to configure these days. A couple of init.ora parameters
usually suffice. You might also need to modify your tnsnames.ora to ensure that some
connections can still be made using a dedicated Server Process (DBAs really don’t want to
be waiting for large maintenance jobs to complete because they’re having to use a shared
Server Process!) –and yes, it’s perfectly possible to allow dedicated connections in a
primarily Shared Server configuration.
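
For that dedicated-connection escape hatch, the trick is the SERVER=DEDICATED clause in the relevant tnsnames.ora entry. The alias, host and service names below are hypothetical; only the SERVER line is the point:

    # tnsnames.ora - a hypothetical alias forcing a dedicated Server Process
    ORCL_DEDICATED =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = myhost)(PORT = 1521))
        (CONNECT_DATA =
          (SERVICE_NAME = orcl)
          (SERVER = DEDICATED)
        )
      )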

There’s plenty of anecdotal evidence that performance takes a nose-dive after making the
switch from Dedicated Server, but that’s almost certainly because configuration wasn’t
done properly, or was implemented for inappropriate reasons. Done for the right reasons
and in the right way, Shared Server should actually improve performance. Unix systems
will benefit when the number of individual processes in Dedicated configuration is getting
too high –the exact point where that happens is, of course, dependent on the capabilities
of your hardware, but as a very rough rule of thumb, anytime you have more than about 50
to 100 concurrent connections, it might be worth investigating the benefits of making the
switch. Under that limit, Dedicated configuration will perform better.

Similarly, if you’ve so much RAM to spare that it doesn’t matter that a significant amount
of it is tied up in dedicated PGAs sitting there unused whilst Users don’t do much real
work, then feel free to stick with Dedicated configuration. But if RAM is starting to get
tight, Shared Server is an ideal way of making more efficient use of what you’ve got. Until
RAM starts being squeezed, though, Dedicated configuration will perform better than
Shared Server.

For NT systems, however, the advice is not quite so clear-cut. Certainly, the concern
about excessive RAM consumption by idle Users still applies. Likewise, an excessive number
of concurrent Users means that lots of additional threads within the Oracle process have to
be spawned, and you can’t just spawn additional threads forever, without the Server
beginning to feel the strain.

On the other hand, because Oracle on NT is genuinely multi-threaded, even a “Dedicated”
server isn’t really that dedicated: the O/S is forever switching between threads within the
Oracle process, and each User is therefore already being ‘time-sliced’ on the processor.
That can be a problem with Dedicated Server, because it’s a synchronous architecture (you
submit a query, the server responds). Having to park the state of that synchronous
connection every time the O/S switches to service another thread is a fairly substantial
CPU overhead. Shared Server is, however, inherently asynchronous (you submit a query,
someone eventually picks it up off a queue). So the thread-switch overhead is reduced,
and accordingly you may well find that performance on NT improves with Shared Server
configuration, even when you don’t have a large number of Users or a RAM problem.

On NT, then, you need to benchmark carefully when User numbers and RAM availability
aren’t the issue, and make sure that making the switch doesn’t compromise performance.
In theory, it should help, not hinder –but theory and practice aren’t always in
agreement!




Copyright © Howard Rogers 2002, 16/03/2002
