Monday, March 12, 2012

Hardware scalability

Dear all,
My company has a Windows 2000 SP3 server (PIII 1.2 GHz, 1.5 GB RAM) running SQL Server 2000 SP3a (around 90 databases, of which around 10 are used daily, though not intensively; total data size is 5.5 GB, of which about 10% belongs to the daily-used databases). SQL Server is configured to use at most 800 MB of RAM. Transactional replication is implemented, with this server acting as publisher only (the distributor and subscribers are on other, more powerful servers). All is working fine. Network load is low (peaks of 10 MB for I/O). We have around 20 frequent users on this server.
We are planning to implement two new databases that should represent a significant increase in workload (frequent heavy batch processes, about 1 GB of data). Moreover, we need to integrate them into the transactional replication process.
We are wondering whether our hardware will be sufficient to support this added workload, but I have not found any rule for deducing hardware requirements from database size and usage.
Could you give me some clues for scaling my server, knowing that I have no similar test server for benchmarking? Perhaps you run similar systems?
Thanks a lot,
Eric.
Eric,
You're correct that there's not much to go on here. However, I would point out
one thing. It sounds like the server infrequently services reasonably short
requests. That suggests the single processor is keeping up with the requests
because it's generally only getting one request at a time, so responsiveness
to the users is acceptable. By integrating heavy batch processes into the mix,
there's a strong likelihood that SQL Server won't have an internal scheduler
free when a user request is initiated. A mix of heavy batches or large
queries with OLTP on too few processors usually results in end users waiting
on screens.
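One rough way to see the contention Danny describes on SQL Server 2000 is to watch for runnable or blocked connections while a batch runs. This is only a sketch using the 2000-era sysprocesses view (the SPID cutoff of 50 for system sessions is a common convention, not a hard rule):

```sql
-- Rough sketch for SQL Server 2000: while a heavy batch is running,
-- check whether other connections are piling up runnable or blocked
-- instead of being serviced immediately.
SELECT spid, status, blocked, lastwaittype, waittime, cpu, physical_io
FROM master..sysprocesses
WHERE spid > 50                              -- skip system SPIDs
  AND (status = 'runnable' OR blocked <> 0)  -- waiting for CPU or blocked
ORDER BY waittime DESC
```

If user SPIDs regularly show up here during batch windows, the single CPU is likely the bottleneck described above.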
"itparis" <itparis@.discussions.microsoft.com> wrote in message
news:DF71D8B2-6184-4631-82F3-6FE96FA81514@.microsoft.com...
|||"Danny" <someone@.nowhere.com> wrote in message
news:KpdWe.6396$XO6.2458@.trnddc03...
I have to agree with Danny on this one. The right answer (as always with
databases) is: it depends.
Do you want to optimize your server for general usage, or do you want to
optimize it to handle the spikes in load?
For general usage, I would suggest adding more RAM to the box: 4 GB total,
with 2 GB given to SQL Server. You will still have spikes, most likely due to
the processor running heavy batches, but you should otherwise be in decent
shape. On the replication side of the house, a lot depends on the size of the
transactions being replicated and how often replication occurs (immediate,
every 15 minutes, etc.). You may want to upgrade your NIC, if possible, to
100 Mbps or even 1 Gbps.
If you want to optimize to handle the spikes, then more RAM plus two
processors with higher clock speeds and larger L2 caches should help out.
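If you do add the RAM, raising SQL Server's ceiling is a quick sp_configure change. A sketch only; the 2048 figure is illustrative, and on 32-bit Windows 2000 letting one process use much beyond 2 GB also requires the /3GB boot switch or AWE, so verify those first:

```sql
-- Illustrative only: raise SQL Server 2000's memory ceiling after a RAM
-- upgrade. On 32-bit Windows 2000, going beyond ~2 GB per process also
-- needs the /3GB boot switch or AWE; the 2048 value is a placeholder.
EXEC sp_configure 'show advanced options', 1
RECONFIGURE
EXEC sp_configure 'max server memory (MB)', 2048
RECONFIGURE
```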
You can read up on a lot of the perf counters to watch at
www.sql-server-performance.com. Take a look at McGehee's article... it's a
great first step:
http://www.sql-server-performance.co...ance_audit.asp
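Some of the counters that kind of audit covers can also be read from inside SQL Server 2000 via the sysperfinfo view; for example (note that ratio counters are raw values that must be divided by their matching "base" counter):

```sql
-- Read a few headline performance counters from inside SQL Server 2000.
-- 'Buffer cache hit ratio' is cntr_value divided by its 'base' row.
SELECT object_name, counter_name, cntr_value
FROM master..sysperfinfo
WHERE counter_name IN ('Buffer cache hit ratio',
                       'Buffer cache hit ratio base',
                       'Batch Requests/sec',
                       'Page life expectancy')
```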
Rick Sawtell
MCT, MCSD, MCDBA
|||The "standard" config for dedicated, heavy-duty SQL Server hardware is
two processors, all the RAM you can get, at least a separate physical drive
for the log files, and generally RAID-5 for the main databases.
These days, with 200 GB drives going for a hundred bucks, you don't need
RAID just to get your storage size up, but it still helps isolate physical
storage concerns. Network-attached storage is even better, if you have
gigabit networking. And oh yes, Windows 2003 makes hyperthreading work and
has better general threading, plus COM+ 1.5.
Pull up Dell and configure such a server; I bet you can get a couple of
3 GHz processors starting around, um... $10k? $15k? Depends. Once you reach
blade scale, adding another processor is cheap.
Let's say a proper current box like this would be around 5x faster than a
single PIII with a single physical disk drive.
J.
On Thu, 15 Sep 2005 02:00:07 -0700, "itparis"
<itparis@.discussions.microsoft.com> wrote:

