Monday, March 26, 2012
Having an updated base every day
Hello,
I have two symmetrical databases on two SQL Servers - one remote, the
other local. The data on the remote server changes every minute. The data
in the local SQL Server is used for creating reports and general testing.
Sometimes data in the local database may be irreversibly deleted.
I want some sort of procedure that would do one-direction
synchronisation of the two databases. The data in the remote database should
overwrite the data in the local database, so that after the update the local
database is an exact copy of the remote database. Of course, this kind of
result could be achieved by detaching the remote database and bringing over
the database files, or by doing a full DTS transfer. However, detaching and
transferring is not a viable solution, since the remote database is actively
used and the size of the data files is considerable; and a full DTS transfer
would take years. There needs to be some comparison of record hashes etc., so
that the amount of data that has to be transferred is reduced to a minimum.
Thanks,
Pavils

The first thing that comes to mind is BACKUP and RESTORE. After you restore
a database backup you will have an exact copy of the original (as of the
date and time the backup completed).
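The BACKUP/RESTORE approach can be sketched roughly as follows; the database names, file paths, and logical file names here are placeholders, not taken from the original post:

```sql
-- On the remote server: take a full backup (hypothetical names/paths).
BACKUP DATABASE RemoteDb
    TO DISK = N'D:\backups\RemoteDb.bak'
    WITH INIT

-- On the local server, after copying the .bak file over:
RESTORE DATABASE LocalDb
    FROM DISK = N'D:\backups\RemoteDb.bak'
    WITH REPLACE,
        MOVE 'RemoteDb_Data' TO N'D:\data\LocalDb.mdf',
        MOVE 'RemoteDb_Log' TO N'D:\data\LocalDb.ldf'
```

Only the backup file crosses the wire, and a differential backup (BACKUP DATABASE ... WITH DIFFERENTIAL) restored on top of an earlier full backup can shrink the transfer further.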
You mention that the size of the data files is "considerable." I am not
sure what that means, or if that is a limitation for you in your
environment.
You might be able to look into replication.
Another option: if you have datetime columns (such as DateUpdated or
DateInserted) stored on your tables you could create your own shipping
routines...simply find the "recent" data (whatever that means to you) and
insert it.
You could also use linked servers and INSERT INTO...SELECT...WHERE NOT
EXISTS... you will have to fill in the blanks, but they should be fairly
easy.
--
Keith
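The linked-server suggestion above, as a rough sketch; the linked server name, table, and columns are hypothetical, and a real routine would also need UPDATEs and DELETEs to make the local copy exact:

```sql
-- Copy rows that exist on the remote server but not locally yet,
-- assuming REMOTESRV is a configured linked server and OrderID is the key.
INSERT INTO LocalDb.dbo.Orders (OrderID, CustomerID, Amount)
SELECT r.OrderID, r.CustomerID, r.Amount
FROM REMOTESRV.RemoteDb.dbo.Orders AS r
WHERE NOT EXISTS (
    SELECT 1
    FROM LocalDb.dbo.Orders AS l
    WHERE l.OrderID = r.OrderID
)
```

If the tables carry a DateUpdated column, adding AND r.DateUpdated > @LastSync to the WHERE clause cuts down how much the remote server has to scan and send.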
"Pavils Jurjans" <pavils@mailbox.riga.lv> wrote in message
news:OsrHETCkEHA.3664@TK2MSFTNGP12.phx.gbl...
Hello Keith,
> You mention that the size of the data files is "considerable." I am not
> sure what that means, or if that is a limitation for you in your
> environment.
The idea is that I don't want to transfer every single bit of the whole
database.
> You might be able to look into replication.
Would replication allow me to create a fully matching copy with as little
data transfer as possible?
> Another option: if you have datetime columns (such as DateUpdated or
> DateInserted) stored on your tables you could create your own shipping
> routines...simply find the "recent" data (whatever that means to you) and
> insert it.
That could be a good solution, but it requires reworking much of the current
application.
> You could also use linked servers and INSERT INTO...SELECT...WHERE NOT
> EXISTS... you will have to fill in the blanks, but they should be fairly
> easy.
Surely, I can do that, however I was hoping for some more automatic (or
wizard-style) solution.
-- Pavils.

"Pavils Jurjans" <pavils@mailbox.riga.lv> wrote in message
news:uUQ1L7CkEHA.3968@TK2MSFTNGP11.phx.gbl...
> Hello Keith,
> > You mention that the size of the data files is "considerable." I am not
> > sure what that means, or if that is a limitation for you in your
> > environment.
> The idea is that I don't want to transfer every single bit of the whole
> database.
Understood.
> > You might be able to look into replication.
> Would replication allow me to create a fully matching copy with as little
> data transfer as possible?
That is the point of replication.
> > Another option: if you have datetime columns (such as DateUpdated or
> > DateInserted) stored on your tables you could create your own shipping
> > routines...simply find the "recent" data (whatever that means to you) and
> > insert it.
> That could be a good solution, but it requires reworking much of the
> current application.
Probably so (if you don't have a way to identify "new" rows).
> > You could also use linked servers and INSERT INTO...SELECT...WHERE NOT
> > EXISTS... you will have to fill in the blanks, but they should be fairly
> > easy.
> Surely, I can do that, however I was hoping for some more automatic (or
> wizard-style) solution.
This is not much different from my previous idea...it simply requires you to
type up some Transact-SQL. Hey, no one said being a DBA/developer/IT
professional was easy.
> -- Pavils.
--
Keith
Wednesday, March 21, 2012
has no parameters and arguments were supplied
I'm kind of new at doing something like this. I'm trying to pull data into a results table from tables that exist in one of two databases. It works fine in Query Analyzer,
but gives me the error "has no parameters and arguments were supplied" when I try to convert it to a stored procedure.
The procedure is as follows; any help would be appreciated.
CREATE PROCEDURE sp_getInvoiceNoTest AS
declare @InVoiceNo as VarChar(30)
delete From Invoice_NBR
Insert into Invoice_NBR (Tran_NBr,ADDR_Name,ADDR_Line2,ADDR_Line3,CITY_NAME,State_Name,ADDR_NAME2,ADDR_LINE4,ADDR_LINE5,CITY_NAME2,State_Name2)
select a.TranNo,b.AddrLine1,b.AddrLine2,b.AddrLine3,b.city,b.StateID,c.AddrName,c.AddrLine2,c.AddrLine3,c.City,c.StateID
from Colucw17.Acuity_App.dbo.tarInvoice as a
inner join Colucw17.Acuity_App.dbo.tciAddress as b
on a.BilltoAddrKey=b.AddrKey
inner join Colucw17.Acuity_App.dbo.tciAddress as c
on a.BilltoAddrKey=c.AddrKey and a.BilltoAddrKey=a.BilltoCustAddrKey
inner join Colucw17.Acuity_App.dbo.tarCustomer as d
on a.CustKey=d.CustKey
inner join Colucw17.Acuity_App.dbo.tciContact as f
on a.confirmtoCntctKey=f.CntctKey
where a.CreateuserID<>'admin' and a.TranNo='@InvoiceNo'
--Insert into Invoice_NBr (Tran_NBr,ADDR_Name,ADDR_Line2,ADDR_Line3,CITY_NAME,State_Name,ADDR_NAME2,ADDR_LINE4,ADDR_LINE5,CITY_NAME2,State_Name2)
select a.TranNo,b.AddrLine1,b.AddrLine2,b.AddrLine3,b.city,b.StateID,c.AddrName,c.AddrLine2,c.AddrLine3,c.City,c.StateID
from Colucw17.CSM_App.dbo.tarInvoice as a
inner join Colucw17.CSM_App.dbo.tciAddress as b
on a.BilltoAddrKey=b.AddrKey
inner join Colucw17.CSM_App.dbo.tciAddress as c
on a.BilltoAddrKey=c.AddrKey and a.BilltoAddrKey=a.BilltoCustAddrKey
inner join Colucw17.CSM_App.dbo.tarCustomer as d
on a.CustKey=d.CustKey
inner join Colucw17.CSM_App.dbo.tciContact as f
on a.confirmtoCntctKey=f.CntctKey
where a.CreateuserID<>'admin' and a.TranNo='@InvoiceNo'
GO

CREATE PROCEDURE sp_getInvoiceNoTest AS
(
@InVoiceNo as VarChar(30)
)
as
set nocount on
delete From Invoice_NBR
Insert into Invoice_NBR (Tran_NBr,ADDR_Name,ADDR_Line2,ADDR_Line3,CITY_NAME,State_Name,ADDR_NAME2,ADDR_LINE4,ADDR_LINE5,CITY_NAME2,State_Name2)
select a.TranNo,b.AddrLine1,b.AddrLine2,b.AddrLine3,b.city,b.StateID,c.AddrName,c.AddrLine2,c.AddrLine3,c.City,c.StateID
from Colucw17.Acuity_App.dbo.tarInvoice as a
inner join Colucw17.Acuity_App.dbo.tciAddress as b
on a.BilltoAddrKey=b.AddrKey
inner join Colucw17.Acuity_App.dbo.tciAddress as c
on a.BilltoAddrKey=c.AddrKey and a.BilltoAddrKey=a.BilltoCustAddrKey
inner join Colucw17.Acuity_App.dbo.tarCustomer as d
on a.CustKey=d.CustKey
inner join Colucw17.Acuity_App.dbo.tciContact as f
on a.confirmtoCntctKey=f.CntctKey
where a.CreateuserID<>'admin' and a.TranNo=@InvoiceNo
--Insert into Invoice_NBr (Tran_NBr,ADDR_Name,ADDR_Line2,ADDR_Line3,CITY_NAME,State_Name,ADDR_NAME2,ADDR_LINE4,ADDR_LINE5,CITY_NAME2,State_Name2)
select a.TranNo,b.AddrLine1,b.AddrLine2,b.AddrLine3,b.city,b.StateID,c.AddrName,c.AddrLine2,c.AddrLine3,c.City,c.StateID
from Colucw17.CSM_App.dbo.tarInvoice as a
inner join Colucw17.CSM_App.dbo.tciAddress as b
on a.BilltoAddrKey=b.AddrKey
inner join Colucw17.CSM_App.dbo.tciAddress as c
on a.BilltoAddrKey=c.AddrKey and a.BilltoAddrKey=a.BilltoCustAddrKey
inner join Colucw17.CSM_App.dbo.tarCustomer as d
on a.CustKey=d.CustKey
inner join Colucw17.CSM_App.dbo.tciContact as f
on a.confirmtoCntctKey=f.CntctKey
where a.CreateuserID<>'admin' and a.TranNo=@InvoiceNo
GO

Using the code you supplied, it generated the following error messages when I checked the syntax:

Error 170: Line 3: Incorrect syntax near '@InvoiceNo'.
Line 11: Incorrect syntax near
Must declare the variable '@InvoiceNo'.

I added the declare statement but it still errored.

Drop the AS between @InvoiceNo As VARCHAR.

Thanks Peso, that was it. Your help is greatly appreciated. Hopefully, I'll get the hang of this stuff.
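For reference, the procedure header implied by Peso's fix would look something like this (a sketch showing only the header and the filter line; the two INSERT...SELECT statements are unchanged from the original post):

```sql
-- Parameter declared in the header (no DECLARE, no stray AS before the
-- parameter list), then referenced unquoted in the WHERE clause.
CREATE PROCEDURE sp_getInvoiceNoTest
    @InvoiceNo VarChar(30)
AS
SET NOCOUNT ON
DELETE FROM Invoice_NBR
-- ...the two INSERT...SELECT statements from the original post follow,
-- each ending with:
--   where a.CreateuserID <> 'admin' and a.TranNo = @InvoiceNo
GO
```

As a side note, the sp_ prefix is by convention reserved for system procedures (SQL Server looks them up in master first), so a name like usp_getInvoiceNo would be a safer choice.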
but gives me the error "has no parameters and arguments were supplied" when I try to convert it to a stored procedure.
The procedure is as follows, any help would be appreciated.
CREATE PROCEDURE sp_getInvoiceNoTest AS
declare @.InVoiceNo as VarChar(30)
delete From Invoice_NBR
Insert into Invoice_NBR (Tran_NBr,ADDR_Name,ADDR_Line2,ADDR_Line3,CITY_NAM E,State_Name,ADDR_NAME2,ADDR_LINE4,ADDR_LINE5,CITY _NAME2,State_Name2)
select a.TranNo,b.AddrLine1,b.AddrLine2,b.AddrLine3,b.cit y,b.StateID,c.AddrName,c.AddrLine2,c.AddrLine3,c.C ity,c.StateID
from Colucw17.Acuity_App.dbo.tarInvoice as a
inner join
Colucw17.Acuity_App.dbo.tciAddress as b
on a.BilltoAddrKey=b.AddrKey
inner join
Colucw17.Acuity_App.dbo.tciAddress as c
on a.BilltoAddrKey=c.AddrKey and a.BilltoAddrKey=a.BilltoCustAddrKey
inner join
Colucw17.Acuity_App.dbo.tarCustomer as d
on a.CustKey=d.CustKey
inner join
Colucw17.Acuity_App.dbo.tciContact as f
on a.confirmtoCntctKey=f.CntctKey
where a.CreateuserID<>'admin' and a.TranNo='@.InvoiceNo'
--Insert into Invoice_NBr (Tran_NBr,ADDR_Name,ADDR_Line2,ADDR_Line3,CITY_NAM E,State_Name,ADDR_NAME2,ADDR_LINE4,ADDR_LINE5,CITY _NAME2,State_Name2)
select a.TranNo,b.AddrLine1,b.AddrLine2,b.AddrLine3,b.cit y,b.StateID,c.AddrName,c.AddrLine2,c.AddrLine3,c.C ity,c.StateID
from Colucw17.CSM_App.dbo.tarInvoice as a
inner join
Colucw17.CSM_App.dbo.tciAddress as b
on a.BilltoAddrKey=b.AddrKey
inner join
Colucw17.CSM_App.dbo.tciAddress as c
on a.BilltoAddrKey=c.AddrKey and a.BilltoAddrKey=a.BilltoCustAddrKey
inner join
Colucw17.CSM_App.dbo.tarCustomer as d
on a.CustKey=d.CustKey
inner join
Colucw17.CSM_App.dbo.tciContact as f
on a.confirmtoCntctKey=f.CntctKey
where a.CreateuserID<>'admin' and a.TranNo='@.InvoiceNo'
GOCREATE PROCEDURE sp_getInvoiceNoTest AS
(
@.InVoiceNo as VarChar(30)
)
as
set nocount on
delete From Invoice_NBR
Insert into Invoice_NBR (Tran_NBr,ADDR_Name,ADDR_Line2,ADDR_Line3,CITY_NAM E,State_Name,ADDR_NAME2,ADDR_LINE4,ADDR_LINE5,CITY _NAME2,State_Name2)
select a.TranNo,b.AddrLine1,b.AddrLine2,b.AddrLine3,b.cit y,b.StateID,c.AddrName,c.AddrLine2,c.AddrLine3,c.C ity,c.StateID
from Colucw17.Acuity_App.dbo.tarInvoice as a
inner join
Colucw17.Acuity_App.dbo.tciAddress as b
on a.BilltoAddrKey=b.AddrKey
inner join
Colucw17.Acuity_App.dbo.tciAddress as c
on a.BilltoAddrKey=c.AddrKey and a.BilltoAddrKey=a.BilltoCustAddrKey
inner join
Colucw17.Acuity_App.dbo.tarCustomer as d
on a.CustKey=d.CustKey
inner join
Colucw17.Acuity_App.dbo.tciContact as f
on a.confirmtoCntctKey=f.CntctKey
where a.CreateuserID<>'admin' and a.TranNo=@InvoiceNo
--Insert into Invoice_NBr (Tran_NBr,ADDR_Name,ADDR_Line2,ADDR_Line3,CITY_NAME,State_Name,ADDR_NAME2,ADDR_LINE4,ADDR_LINE5,CITY_NAME2,State_Name2)
select a.TranNo,b.AddrLine1,b.AddrLine2,b.AddrLine3,b.city,b.StateID,c.AddrName,c.AddrLine2,c.AddrLine3,c.City,c.StateID
from Colucw17.CSM_App.dbo.tarInvoice as a
inner join
Colucw17.CSM_App.dbo.tciAddress as b
on a.BilltoAddrKey=b.AddrKey
inner join
Colucw17.CSM_App.dbo.tciAddress as c
on a.BilltoAddrKey=c.AddrKey and a.BilltoAddrKey=a.BilltoCustAddrKey
inner join
Colucw17.CSM_App.dbo.tarCustomer as d
on a.CustKey=d.CustKey
inner join
Colucw17.CSM_App.dbo.tciContact as f
on a.confirmtoCntctKey=f.CntctKey
where a.CreateuserID<>'admin' and a.TranNo=@InvoiceNo
GO|||Using the code you supplied, it generated the following error messages when I checked the syntax:
Error 170: Line 3: Incorrect syntax near @InvoiceNo.
Line 11: Incorrect syntax near
Must declare the variable @InvoiceNo.
I added a DECLARE statement, but it still errored.|||Drop the AS between @InvoiceNo and VARCHAR.|||Thanks Peso, that was it. Your help is greatly appreciated. Hopefully, I'll get the hang of this stuff.
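For anyone following along, here is a minimal sketch of the header syntax SQL Server accepts, combining Peso's fix with an unquoted parameter reference (the table and column names are the ones used in the posts above):

```sql
-- Parameters are declared after the procedure name and before the body's AS;
-- referencing @InvoiceNo without quotes compares the variable's value,
-- whereas '@InvoiceNo' compares against the literal string.
CREATE PROCEDURE sp_getInvoiceNoTest
    @InvoiceNo VarChar(30)
AS
SET NOCOUNT ON
SELECT a.TranNo
FROM Colucw17.Acuity_App.dbo.tarInvoice AS a
WHERE a.CreateUserID <> 'admin' AND a.TranNo = @InvoiceNo
GO
```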
Monday, March 12, 2012
Hardware scalability
Dear all,
My company has a Win 2000 SP3 server (PIII 1.2GHz, 1.5GB RAM) with SQL 2000
SP3a installed (around 90 databases, of which around 10 are used daily but
not intensively - total data size: 5.5GB, of which about 10% belongs to the
daily-used databases). SQL Server is set up to use at most 800MB of RAM.
Transactional replication is implemented. The server is set up as publisher
only (the distributor and subscribers are on other, more powerful servers).
All is working fine. Network load is low (10MB peaks for I/O). We have
around 20 frequent users on this server.
We are planning to implement two new databases that should represent a
significant increase in workload (frequent heavy batch processes - 1GB of
data). Moreover, we need to integrate them into the transactional
replication process.
We are wondering whether our hardware will be sufficient to support this
added workload. Yet I have not found any rule for deducing hardware
requirements from database size and use.
Could you give me clues for sizing my server, knowing that I have no
similar test server for benchmarking? Perhaps you run similar systems?
Thanks a lot,
Eric.
Eric,
You're correct, there's not much to go on here. However, I would point out
one thing. It sounds like the server infrequently services reasonably short
requests. That indicates the single processor is probably keeping up with
the requests because it's generally only getting one request at a time, so
responsiveness to the users is acceptable. By adding heavy batch processes
to the mix, there's a strong likelihood that SQL won't have an internal
scheduler free when a user request is initiated. A mix of heavy batches or
large queries with OLTP on too few processors usually results in end users
waiting on screens.
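Danny's scheduler point can be checked empirically. A rough sketch against SQL 2000's master..sysprocesses (the 'runnable' filter is the interesting part - those sessions are ready to run but have no free scheduler):

```sql
-- A persistent set of 'runnable' sessions while the batches execute
-- means user requests are queuing for CPU rather than waiting on I/O.
SELECT spid, status, cpu, physical_io, lastwaittype
FROM master..sysprocesses
WHERE status = 'runnable'
```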
"itparis" <itparis@.discussions.microsoft.com> wrote in message
news:DF71D8B2-6184-4631-82F3-6FE96FA81514@.microsoft.com...
|||"Danny" <someone@.nowhere.com> wrote in message
news:KpdWe.6396$XO6.2458@.trnddc03...
I have to agree with Danny on this one. The right answer (as always with
databases) is: it depends.
Do you want to optimize your server for general usage, or do you want to
optimize it to handle the spikes in load?
For general usage, I would suggest that you add more RAM to the box - 4GB
total, giving 2GB to SQL Server. You will still have spikes, most likely
due to the processor running heavy batches, but should otherwise be in
decent shape. On the replication side of the house, depending on the size
of the transactions being replicated and how often replication occurs
(immediate, every 15 minutes, etc.), you may want to upgrade your NIC to
100Mb or even 1Gb if possible.
If you want to optimize to handle the spikes, then more RAM plus two
processors with higher speeds and larger L2 caches should help.
You can read up on a lot of the perf counters to watch at
www.sql-server-performance.com. Take a look at McGehee's article... it's a
great first step:
http://www.sql-server-performance.com/sql_server_performance_audit.asp
Rick Sawtell
MCT, MCSD, MCDBA
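If you take Rick's advice and add RAM, note that the 800MB cap described in the original post is a server setting that has to be raised by hand. A sketch of the SQL 2000 commands (2048 here matches the suggested 2GB; adjust to taste):

```sql
-- 'max server memory' is specified in MB.
EXEC sp_configure 'show advanced options', 1
RECONFIGURE
EXEC sp_configure 'max server memory', 2048
RECONFIGURE
```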
|||The "standard" config for dedicated, heavy-duty SQL Server hardware is
two processors, all the RAM you can get, at least a separate physical drive
for the log files, and generally RAID-5 for the main DBs. These days, with
200GB drives going for a hundred bucks, you don't need RAID just to get
your storage size up, but it still helps isolate physical storage concerns.
Network-attached storage is even better, if you have gigabit networking.
And oh yes, Windows 2003 makes hyperthreading work and has better general
threading and COM 1.5+.
Click up Dell and configure such a server; betcha can get a couple of 3GHz
processors starting around, um, ... $10k? $15k? Depends. Once you reach
blade scale, adding another processor is cheap.
Let's say a proper current box like this would be around 5x faster than a
single PIII with a single physical disk drive.
J.
On Thu, 15 Sep 2005 02:00:07 -0700, "itparis"
<itparis@.discussions.microsoft.com> wrote:
My company has got a Win 2000 SP3 (PIII 1.2Ghz, 1.5Go RAM) server with SQL
2000 SP3a installed (around 90 databases in which around 10 are used daily
not intensively - total data size : 5.5Go - in which 10% is for daily used
databases). SQL Server is setup to use at most 800Mo of RAM. Transactional
replication is implemented. The server is setup as editor only (distributor
and subscribers are on other powerful servers). All is working fine. Network
load is low (10Mo pikes for I/O). We have got around 20 frequent users on
this server.
We are planning to implement two new databases that should represents a
significant increase in workload (frequent heavy batches processes - 1Go of
data). Moreover, we need to integrate them in the transactional replication
process.
We are wondering if our hardware will be sufficient enough to support this
added workload. Yet, I've not found any rule to deduce hardware requirements
from databases size and use.
Could you give me clues for scaling my server, knowing that I have no
similar test server to make benchmarking? Maybe have you similar systems?
Thanks a lot,
Eric.
Eric,
You're correct there's not much to go on here. However, I would point out
one thing. Sounds like the server infrequently services reasonably short
requests. That indicates that the single processor is probably keeping up
with the requests because its generally only getting one request at a time.
So the responsiveness to the users is acceptable. By integrating heavy
batch processes into the mix, there's a strong likelihood that SQL won't
have an internal scheduler free when a user request is initiated. A mix of
heavy batch or large query, with OLTP on too few processors usually results
in end users waiting on screens.
"itparis" <itparis@.discussions.microsoft.com> wrote in message
news:DF71D8B2-6184-4631-82F3-6FE96FA81514@.microsoft.com...
> Dear all,
> My company has got a Win 2000 SP3 (PIII 1.2Ghz, 1.5Go RAM) server with SQL
> 2000 SP3a installed (around 90 databases in which around 10 are used daily
> not intensively - total data size : 5.5Go - in which 10% is for daily used
> databases). SQL Server is setup to use at most 800Mo of RAM. Transactional
> replication is implemented. The server is setup as editor only
> (distributor
> and subscribers are on other powerful servers). All is working fine.
> Network
> load is low (10Mo pikes for I/O). We have got around 20 frequent users on
> this server.
> We are planning to implement two new databases that should represents a
> significant increase in workload (frequent heavy batches processes - 1Go
> of
> data). Moreover, we need to integrate them in the transactional
> replication
> process.
> We are wondering if our hardware will be sufficient enough to support this
> added workload. Yet, I've not found any rule to deduce hardware
> requirements
> from databases size and use.
> Could you give me clues for scaling my server, knowing that I have no
> similar test server to make benchmarking? Maybe have you similar systems?
> Thanks a lot,
> Eric.
|||"Danny" <someone@.nowhere.com> wrote in message
news:KpdWe.6396$XO6.2458@.trnddc03...
> Eric,
> You're correct there's not much to go on here. However, I would point out
> one thing. Sounds like the server infrequently services reasonably short
> requests. That indicates that the single processor is probably keeping up
> with the requests because its generally only getting one request at a
> time. So the responsiveness to the users is acceptable. By integrating
> heavy batch processes into the mix, there's a strong likelihood that SQL
> won't have an internal scheduler free when a user request is initiated. A
> mix of heavy batch or large query, with OLTP on too few processors usually
> results in end users waiting on screens.
>
I have to agree with Danny on this one. The right answer (as always with
database) is, It depends.
Do you want to optimize your server for general usage, or do you want to
optimize your server to handle the spikes in performance.
For general usage, I would suggest that you add more RAM to the box. 4GB
total and give 2GB to SQL Server. You will still have spikes, most likely
due to the processor running heavy batches, but should otherwise be in
decent shape. On the replication side of the house, depending on the size
of your transactions which are being replicated and how often replication
occurs (immediate, every 15 minutes etc.). You may want to upgrade your NIC
if possible to 100MB or even 1GB.
If you want to optimize to handle the spikes, then more RAM, 2 procs with
higher speeds and larger L2 caches should help out.
You can read up on a lot of the perf counters to watch for at
www.sql-server-performance.com Take a look at McGeHee's article... It's a
great first step...
http://www.sql-server-performance.co...ance_audit.asp
Rick Sawtell
MCT, MCSD, MCDBA
|||The "standard" config for a dedicated and heavy-duty SQLServer
hardware is 2-processors, all the RAM you can get, at least separate
physical drive for log files, generally RAID-5 for the main DBs.
These days with 200gb drives going for a hundred bux you don't need
RAID just to get your storage size up, but it still helps isolate
physical storage concerns. Network-attached storage is even better,
if you have gigahertz networks. And oh yes, Windows2003, makes
hyperthreading work and has better general threading and COM 1.5+.
Click up Dell and configure such a server, betcha can get a couple of
3ghz processors starting around, um, ... $10k? $15k? Depends. Once
you reach blade-scale, adding another processor is cheap.
Let's say a proper current box like this would be around 5x faster
than a single PIII with a single physical disk drive.
J.
On Thu, 15 Sep 2005 02:00:07 -0700, "itparis"
<itparis@.discussions.microsoft.com> wrote:
>Dear all,
>My company has got a Win 2000 SP3 (PIII 1.2Ghz, 1.5Go RAM) server with SQL
>2000 SP3a installed (around 90 databases in which around 10 are used daily
>not intensively - total data size : 5.5Go - in which 10% is for daily used
>databases). SQL Server is setup to use at most 800Mo of RAM. Transactional
>replication is implemented. The server is setup as editor only (distributor
>and subscribers are on other powerful servers). All is working fine. Network
>load is low (10Mo pikes for I/O). We have got around 20 frequent users on
>this server.
>We are planning to implement two new databases that should represents a
>significant increase in workload (frequent heavy batches processes - 1Go of
>data). Moreover, we need to integrate them in the transactional replication
>process.
>We are wondering if our hardware will be sufficient enough to support this
>added workload. Yet, I've not found any rule to deduce hardware requirements
>from databases size and use.
>Could you give me clues for scaling my server, knowing that I have no
>similar test server to make benchmarking? Maybe have you similar systems?
>Thanks a lot,
>Eric.
Hardware scalability
Dear all,
My company has got a Win 2000 SP3 (PIII 1.2Ghz, 1.5Go RAM) server with SQL
2000 SP3a installed (around 90 databases in which around 10 are used daily
not intensively - total data size : 5.5Go - in which 10% is for daily used
databases). SQL Server is setup to use at most 800Mo of RAM. Transactional
replication is implemented. The server is setup as editor only (distributor
and subscribers are on other powerful servers). All is working fine. Network
load is low (10Mo pikes for I/O). We have got around 20 frequent users on
this server.
We are planning to implement two new databases that should represents a
significant increase in workload (frequent heavy batches processes - 1Go of
data). Moreover, we need to integrate them in the transactional replication
process.
We are wondering if our hardware will be sufficient enough to support this
added workload. Yet, I've not found any rule to deduce hardware requirements
from databases size and use.
Could you give me clues for scaling my server, knowing that I have no
similar test server to make benchmarking? Maybe have you similar systems?
Thanks a lot,
Eric.Eric,
You're correct there's not much to go on here. However, I would point out
one thing. Sounds like the server infrequently services reasonably short
requests. That indicates that the single processor is probably keeping up
with the requests because its generally only getting one request at a time.
So the responsiveness to the users is acceptable. By integrating heavy
batch processes into the mix, there's a strong likelihood that SQL won't
have an internal scheduler free when a user request is initiated. A mix of
heavy batch or large query, with OLTP on too few processors usually results
in end users waiting on screens.
"itparis" <itparis@.discussions.microsoft.com> wrote in message
news:DF71D8B2-6184-4631-82F3-6FE96FA81514@.microsoft.com...
> Dear all,
> My company has got a Win 2000 SP3 (PIII 1.2Ghz, 1.5Go RAM) server with SQL
> 2000 SP3a installed (around 90 databases in which around 10 are used daily
> not intensively - total data size : 5.5Go - in which 10% is for daily used
> databases). SQL Server is setup to use at most 800Mo of RAM. Transactional
> replication is implemented. The server is setup as editor only
> (distributor
> and subscribers are on other powerful servers). All is working fine.
> Network
> load is low (10Mo pikes for I/O). We have got around 20 frequent users on
> this server.
> We are planning to implement two new databases that should represents a
> significant increase in workload (frequent heavy batches processes - 1Go
> of
> data). Moreover, we need to integrate them in the transactional
> replication
> process.
> We are wondering if our hardware will be sufficient enough to support this
> added workload. Yet, I've not found any rule to deduce hardware
> requirements
> from databases size and use.
> Could you give me clues for scaling my server, knowing that I have no
> similar test server to make benchmarking? Maybe have you similar systems?
> Thanks a lot,
> Eric.|||"Danny" <someone@.nowhere.com> wrote in message
news:KpdWe.6396$XO6.2458@.trnddc03...
> Eric,
> You're correct there's not much to go on here. However, I would point out
> one thing. Sounds like the server infrequently services reasonably short
> requests. That indicates that the single processor is probably keeping up
> with the requests because its generally only getting one request at a
> time. So the responsiveness to the users is acceptable. By integrating
> heavy batch processes into the mix, there's a strong likelihood that SQL
> won't have an internal scheduler free when a user request is initiated. A
> mix of heavy batch or large query, with OLTP on too few processors usually
> results in end users waiting on screens.
>
I have to agree with Danny on this one. The right answer (as always with
database) is, It depends.
Do you want to optimize your server for general usage, or do you want to
optimize your server to handle the spikes in performance.
For general usage, I would suggest that you add more RAM to the box. 4GB
total and give 2GB to SQL Server. You will still have spikes, most likely
due to the processor running heavy batches, but should otherwise be in
decent shape. On the replication side of the house, depending on the size
of your transactions which are being replicated and how often replication
occurs (immediate, every 15 minutes etc.). You may want to upgrade your NIC
if possible to 100MB or even 1GB.
If you want to optimize to handle the spikes, then more RAM, 2 procs with
higher speeds and larger L2 caches should help out.
You can read up on a lot of the perf counters to watch for at
www.sql-server-performance.com Take a look at McGeHee's article... It's a
great first step...
http://www.sql-server-performance.com/sql_server_performance_audit.asp
Rick Sawtell
MCT, MCSD, MCDBA|||The "standard" config for a dedicated and heavy-duty SQLServer
hardware is 2-processors, all the RAM you can get, at least separate
physical drive for log files, generally RAID-5 for the main DBs.
These days with 200gb drives going for a hundred bux you don't need
RAID just to get your storage size up, but it still helps isolate
physical storage concerns. Network-attached storage is even better,
if you have gigahertz networks. And oh yes, Windows2003, makes
hyperthreading work and has better general threading and COM 1.5+.
Click up Dell and configure such a server, betcha can get a couple of
3ghz processors starting around, um, ... $10k? $15k? Depends. Once
you reach blade-scale, adding another processor is cheap.
Let's say a proper current box like this would be around 5x faster
than a single PIII with a single physical disk drive.
J.
On Thu, 15 Sep 2005 02:00:07 -0700, "itparis"
<itparis@.discussions.microsoft.com> wrote:
>Dear all,
>My company has got a Win 2000 SP3 (PIII 1.2Ghz, 1.5Go RAM) server with SQL
>2000 SP3a installed (around 90 databases in which around 10 are used daily
>not intensively - total data size : 5.5Go - in which 10% is for daily used
>databases). SQL Server is setup to use at most 800Mo of RAM. Transactional
>replication is implemented. The server is setup as editor only (distributor
>and subscribers are on other powerful servers). All is working fine. Network
>load is low (10Mo pikes for I/O). We have got around 20 frequent users on
>this server.
>We are planning to implement two new databases that should represents a
>significant increase in workload (frequent heavy batches processes - 1Go of
>data). Moreover, we need to integrate them in the transactional replication
>process.
>We are wondering if our hardware will be sufficient enough to support this
>added workload. Yet, I've not found any rule to deduce hardware requirements
>from databases size and use.
>Could you give me clues for scaling my server, knowing that I have no
>similar test server to make benchmarking? Maybe have you similar systems?
>Thanks a lot,
>Eric.
My company has got a Win 2000 SP3 (PIII 1.2Ghz, 1.5Go RAM) server with SQL
2000 SP3a installed (around 90 databases in which around 10 are used daily
not intensively - total data size : 5.5Go - in which 10% is for daily used
databases). SQL Server is setup to use at most 800Mo of RAM. Transactional
replication is implemented. The server is setup as editor only (distributor
and subscribers are on other powerful servers). All is working fine. Network
load is low (10Mo pikes for I/O). We have got around 20 frequent users on
this server.
We are planning to implement two new databases that should represents a
significant increase in workload (frequent heavy batches processes - 1Go of
data). Moreover, we need to integrate them in the transactional replication
process.
We are wondering if our hardware will be sufficient enough to support this
added workload. Yet, I've not found any rule to deduce hardware requirements
from databases size and use.
Could you give me clues for scaling my server, knowing that I have no
similar test server to make benchmarking? Maybe have you similar systems?
Thanks a lot,
Eric.Eric,
You're correct there's not much to go on here. However, I would point out
one thing. Sounds like the server infrequently services reasonably short
requests. That indicates that the single processor is probably keeping up
with the requests because its generally only getting one request at a time.
So the responsiveness to the users is acceptable. By integrating heavy
batch processes into the mix, there's a strong likelihood that SQL won't
have an internal scheduler free when a user request is initiated. A mix of
heavy batch or large query, with OLTP on too few processors usually results
in end users waiting on screens.
"itparis" <itparis@.discussions.microsoft.com> wrote in message
news:DF71D8B2-6184-4631-82F3-6FE96FA81514@.microsoft.com...
> Dear all,
> My company has got a Win 2000 SP3 (PIII 1.2Ghz, 1.5Go RAM) server with SQL
> 2000 SP3a installed (around 90 databases in which around 10 are used daily
> not intensively - total data size : 5.5Go - in which 10% is for daily used
> databases). SQL Server is setup to use at most 800Mo of RAM. Transactional
> replication is implemented. The server is setup as editor only
> (distributor
> and subscribers are on other powerful servers). All is working fine.
> Network
> load is low (10Mo pikes for I/O). We have got around 20 frequent users on
> this server.
> We are planning to implement two new databases that should represents a
> significant increase in workload (frequent heavy batches processes - 1Go
> of
> data). Moreover, we need to integrate them in the transactional
> replication
> process.
> We are wondering if our hardware will be sufficient enough to support this
> added workload. Yet, I've not found any rule to deduce hardware
> requirements
> from databases size and use.
> Could you give me clues for scaling my server, knowing that I have no
> similar test server to make benchmarking? Maybe have you similar systems?
> Thanks a lot,
> Eric.|||"Danny" <someone@.nowhere.com> wrote in message
news:KpdWe.6396$XO6.2458@.trnddc03...
> Eric,
> You're correct there's not much to go on here. However, I would point out
> one thing. Sounds like the server infrequently services reasonably short
> requests. That indicates that the single processor is probably keeping up
> with the requests because its generally only getting one request at a
> time. So the responsiveness to the users is acceptable. By integrating
> heavy batch processes into the mix, there's a strong likelihood that SQL
> won't have an internal scheduler free when a user request is initiated. A
> mix of heavy batch or large query, with OLTP on too few processors usually
> results in end users waiting on screens.
>
I have to agree with Danny on this one. The right answer (as always with
database) is, It depends.
Do you want to optimize your server for general usage, or do you want to
optimize your server to handle the spikes in performance.
For general usage, I would suggest that you add more RAM to the box. 4GB
total and give 2GB to SQL Server. You will still have spikes, most likely
due to the processor running heavy batches, but should otherwise be in
decent shape. On the replication side of the house, depending on the size
of your transactions which are being replicated and how often replication
occurs (immediate, every 15 minutes etc.). You may want to upgrade your NIC
if possible to 100MB or even 1GB.
If you want to optimize to handle the spikes, then more RAM, 2 procs with
higher speeds and larger L2 caches should help out.
You can read up on a lot of the perf counters to watch for at
www.sql-server-performance.com Take a look at McGeHee's article... It's a
great first step...
http://www.sql-server-performance.com/sql_server_performance_audit.asp
Rick Sawtell
MCT, MCSD, MCDBA|||The "standard" config for a dedicated and heavy-duty SQLServer
hardware is 2-processors, all the RAM you can get, at least separate
physical drive for log files, generally RAID-5 for the main DBs.
These days with 200gb drives going for a hundred bux you don't need
RAID just to get your storage size up, but it still helps isolate
physical storage concerns. Network-attached storage is even better,
if you have gigahertz networks. And oh yes, Windows2003, makes
hyperthreading work and has better general threading and COM 1.5+.
Click up Dell and configure such a server, betcha can get a couple of
3ghz processors starting around, um, ... $10k? $15k? Depends. Once
you reach blade-scale, adding another processor is cheap.
Let's say a proper current box like this would be around 5x faster
than a single PIII with a single physical disk drive.
J.
On Thu, 15 Sep 2005 02:00:07 -0700, "itparis"
<itparis@.discussions.microsoft.com> wrote:
>Dear all,
>My company has got a Win 2000 SP3 (PIII 1.2Ghz, 1.5Go RAM) server with SQL
>2000 SP3a installed (around 90 databases in which around 10 are used daily
>not intensively - total data size : 5.5Go - in which 10% is for daily used
>databases). SQL Server is setup to use at most 800Mo of RAM. Transactional
>replication is implemented. The server is setup as editor only (distributor
>and subscribers are on other powerful servers). All is working fine. Network
>load is low (10Mo pikes for I/O). We have got around 20 frequent users on
>this server.
>We are planning to implement two new databases that should represents a
>significant increase in workload (frequent heavy batches processes - 1Go of
>data). Moreover, we need to integrate them in the transactional replication
>process.
>We are wondering if our hardware will be sufficient enough to support this
>added workload. Yet, I've not found any rule to deduce hardware requirements
>from databases size and use.
>Could you give me clues for scaling my server, knowing that I have no
>similar test server to make benchmarking? Maybe have you similar systems?
>Thanks a lot,
>Eric.
Hardware scalability
Dear all,
My company has got a Win 2000 SP3 (PIII 1.2Ghz, 1.5Go RAM) server with SQL
2000 SP3a installed (around 90 databases in which around 10 are used daily
not intensively - total data size : 5.5Go - in which 10% is for daily used
databases). SQL Server is setup to use at most 800Mo of RAM. Transactional
replication is implemented. The server is setup as editor only (distributor
and subscribers are on other powerful servers). All is working fine. Network
load is low (10Mo pikes for I/O). We have got around 20 frequent users on
this server.
We are planning to implement two new databases that should represents a
significant increase in workload (frequent heavy batches processes - 1Go of
data). Moreover, we need to integrate them in the transactional replication
process.
We are wondering if our hardware will be sufficient enough to support this
added workload. Yet, I've not found any rule to deduce hardware requirements
from databases size and use.
Could you give me clues for scaling my server, knowing that I have no
similar test server to make benchmarking? Maybe have you similar systems?
Thanks a lot,
Eric.Eric,
You're correct there's not much to go on here. However, I would point out
one thing. Sounds like the server infrequently services reasonably short
requests. That indicates that the single processor is probably keeping up
with the requests because its generally only getting one request at a time.
So the responsiveness to the users is acceptable. By integrating heavy
batch processes into the mix, there's a strong likelihood that SQL won't
have an internal scheduler free when a user request is initiated. A mix of
heavy batch or large query, with OLTP on too few processors usually results
in end users waiting on screens.
"itparis" <itparis@.discussions.microsoft.com> wrote in message
news:DF71D8B2-6184-4631-82F3-6FE96FA81514@.microsoft.com...
> Dear all,
> My company has got a Win 2000 SP3 (PIII 1.2Ghz, 1.5Go RAM) server with SQL
> 2000 SP3a installed (around 90 databases in which around 10 are used daily
> not intensively - total data size : 5.5Go - in which 10% is for daily used
> databases). SQL Server is setup to use at most 800Mo of RAM. Transactional
> replication is implemented. The server is setup as editor only
> (distributor
> and subscribers are on other powerful servers). All is working fine.
> Network
> load is low (10Mo pikes for I/O). We have got around 20 frequent users on
> this server.
> We are planning to implement two new databases that should represents a
> significant increase in workload (frequent heavy batches processes - 1Go
> of
> data). Moreover, we need to integrate them in the transactional
> replication
> process.
> We are wondering if our hardware will be sufficient enough to support this
> added workload. Yet, I've not found any rule to deduce hardware
> requirements
> from databases size and use.
> Could you give me clues for scaling my server, knowing that I have no
> similar test server to make benchmarking? Maybe have you similar systems?
> Thanks a lot,
> Eric.|||"Danny" <someone@.nowhere.com> wrote in message
news:KpdWe.6396$XO6.2458@.trnddc03...
> Eric,
> You're correct there's not much to go on here. However, I would point out
> one thing. Sounds like the server infrequently services reasonably short
> requests. That indicates that the single processor is probably keeping up
> with the requests because its generally only getting one request at a
> time. So the responsiveness to the users is acceptable. By integrating
> heavy batch processes into the mix, there's a strong likelihood that SQL
> won't have an internal scheduler free when a user request is initiated. A
> mix of heavy batch or large query, with OLTP on too few processors usually
> results in end users waiting on screens.
>
I have to agree with Danny on this one. The right answer (as always with
databases) is: it depends.
Do you want to optimize your server for general usage, or do you want to
optimize your server to handle the spikes in load?
For general usage, I would suggest that you add more RAM to the box: 4GB
total, and give 2GB to SQL Server. You will still have spikes, most likely
due to the processor running heavy batches, but should otherwise be in
decent shape. On the replication side of the house, depending on the size
of the transactions being replicated and how often replication occurs
(immediate, every 15 minutes, etc.), you may want to upgrade your NIC
to 100Mb or even 1Gb if possible.
If you want to optimize to handle the spikes, then more RAM plus 2 procs
with higher speeds and larger L2 caches should help out.
You can read up on a lot of the perf counters to watch at
www.sql-server-performance.com. Take a look at McGeHee's article... it's a
great first step:
http://www.sql-server-performance.c...mance_audit.asp
Rick Sawtell
MCT, MCSD, MCDBA|||The "standard" config for a dedicated and heavy-duty SQL Server
box is 2 processors, all the RAM you can get, at least a separate
physical drive for log files, and generally RAID-5 for the main DBs.
These days, with 200GB drives going for a hundred bucks, you don't need
RAID just to get your storage size up, but it still helps isolate
physical storage concerns. Network-attached storage is even better,
if you have gigabit networks. And oh yes, Windows 2003 makes
hyperthreading work and has better general threading and COM+ 1.5.
Click up Dell and configure such a server; betcha can get a couple of
3GHz processors starting around, um, ... $10k? $15k? Depends. Once
you reach blade scale, adding another processor is cheap.
Let's say a proper current box like this would be around 5x faster
than a single PIII with a single physical disk drive.
J.
On Thu, 15 Sep 2005 02:00:07 -0700, "itparis"
<itparis@.discussions.microsoft.com> wrote:
>Dear all,
>My company has got a Win 2000 SP3 (PIII 1.2Ghz, 1.5Go RAM) server with SQL
>2000 SP3a installed (around 90 databases in which around 10 are used daily
>not intensively - total data size : 5.5Go - in which 10% is for daily used
>databases). SQL Server is setup to use at most 800Mo of RAM. Transactional
>replication is implemented. The server is setup as editor only (distributor
>and subscribers are on other powerful servers). All is working fine. Network
>load is low (10Mo pikes for I/O). We have got around 20 frequent users on
>this server.
>We are planning to implement two new databases that should represents a
>significant increase in workload (frequent heavy batches processes - 1Go of
>data). Moreover, we need to integrate them in the transactional replication
>process.
>We are wondering if our hardware will be sufficient enough to support this
>added workload. Yet, I've not found any rule to deduce hardware requirements
>from databases size and use.
>Could you give me clues for scaling my server, knowing that I have no
>similar test server to make benchmarking? Maybe have you similar systems?
>Thanks a lot,
>Eric.
hardware recommendation
Hi,
I have to recommend a new server for a data warehouse machine, which will
contain some data warehouse databases (5 databases of 5Gb each).
I plan 6gb of memory and some SCSI drives (5 * 18gb in RAID 5 + 2 * 18gb
in RAID 0 for tempdb).
There are 200 users on the system for the moment (not concurrent),
and the usage will be low:
* OLAP cubes
* Reports with Reporting Services
* SharePoint web site (not Portal Server)
So I plan to use SQL Server 2000 Standard Edition with per-processor
licensing.
SQL Server will use 3gb of RAM (Standard Edition is limited),
Analysis Services will use 1gb of RAM,
and the rest is for SharePoint and Reporting Services.
But I'm not sure about the processor...
What do you think about this?
I know that putting everything on 1 server is not good, but we don't have
the budget for 2 SQL Server licenses.
Can another licensing mode be used in my case?
Maybe I can put SharePoint on Win 2003 Web Edition, and all the other
components on a second, bigger server?
What are your recommendations?
Thanks.
Jerome.
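[Editor's note: the proposed memory split can be tallied quickly. One caveat worth flagging: if memory serves, SQL Server 2000 Standard Edition caps the buffer pool at roughly 2GB, not the 3GB planned above (going beyond that needs AWE on Enterprise Edition), so a rough budget on the 6GB box looks like this. All figures are estimates, not measurements.]

```python
# Rough memory budget for the proposed 6 GB box. The 2 GB figure for
# SQL Server assumes Standard Edition's cap (not the 3 GB planned).
TOTAL_GB = 6
sql_server_gb = 2        # assumed SQL 2000 Standard Edition limit
analysis_services_gb = 1
leftover_gb = TOTAL_GB - sql_server_gb - analysis_services_gb
print(f"Left for OS, SharePoint and Reporting Services: {leftover_gb} GB")
```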
I would give more memory to AS; you may run out of memory while
processing those cubes.
Based on my experience, disk is always the bottleneck, especially if
you have a big cube and somebody runs a query which misses all the
aggregates; the performance will be horrible. You need to fine-tune the
aggregates, and you may also want to stress-test your environment before
putting it into production.
Eric Li
SQL DBA
MCDBA
Jj wrote:
> Hi,
> I've to recommend a new server for a datawarehouse server which contain some
> data warehouse databases (5 databases of 5Gb each)
> I plan 6gb of memory and some SCSI drives (5 * 18 gb in raid 5 + 2 * 18 gb
> in raid 0 for the tempdb)
> there is 200 users for the system for the moment (not concurrently)
> and the usage will be low:
> * OLAP Cubes
> * Reports with reporting services
> * Sharepoint web site (not portal server)
> So, I'll plan to use SQL server2000 standard edition with a by processor
> licensing
> SQL Server will use 3gb of RAM (standard edition is limited)
> Analysis services will use 1gb of RAM
> and the rest is for sharepoint and reporting services
> But I'm not sure about the processor...
> what do you think about this?
> I know that putting all in 1 server is not good, but we have not the budget
> to get 2 SQL Server licenses.
> Does another licensing mode can be used in my case?
> Maybe I can put sharepoint on a Win 2003 Web edition, and all the other
> components on a second biggest server?
> what are your recommendations?
> thanks.
> Jerome.
>
|||Well, finally we have an option to reuse an existing server,
with 8 * 32gb SCSI drives, 4gb of memory and 2 procs.
And we'll have 6 small data warehouses!
I'll reformat this server and reorganize the disks,
and I'll add 4gb of memory.
We already have SQL Server 2000 Enterprise on this server!
What are your ideas about the disk organization?
What do you think about this:
6 disks in RAID 5 (for OS, databases, OLAP cubes) + 2 in RAID 0 (stripe) (for
tempdb and the Windows swap file),
or all 8 disks in RAID 0+1 (stripe + mirror)?
Is the AWE option recommended in this configuration?
Because this server will support staging areas (6) and data warehouses (6) =
at least 12 databases.
Is it better to create 2 instances of SQL Server (1 for the staging
databases, a second for the data warehouse databases) and start the staging
instance only when needed?
Is this configuration better than all in 1 instance?
thanks for your ideas.
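[Editor's note: the two layouts being weighed trade usable capacity against redundancy. A quick sketch with the 8 x 32gb disks, using simplified RAID math that ignores formatting overhead and hot spares:]

```python
# Usable capacity under simplified RAID math, 32 GB disks assumed.
DISK_GB = 32

def raid5(n):    # one disk's worth of capacity goes to parity
    return (n - 1) * DISK_GB

def raid0(n):    # pure stripe, no redundancy
    return n * DISK_GB

def raid10(n):   # striped mirrors: half the raw capacity
    return n * DISK_GB // 2

option_a = raid5(6) + raid0(2)   # 160 + 64 = 224 GB, but tempdb stripe unprotected
option_b = raid10(8)             # 128 GB, everything redundant

print(f"RAID 5 (6 disks) + RAID 0 (2 disks): {option_a} GB usable")
print(f"RAID 0+1 (8 disks):                  {option_b} GB usable")
```

The first option yields roughly 96GB more usable space but leaves tempdb on an unprotected stripe; the second protects every volume at that capacity cost.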
"Eric.Li" <anonymous@.microsoftnews.org> wrote in message
news:OD$WSzkSEHA.3756@.TK2MSFTNGP11.phx.gbl...
> I would like to give more memory to AS, you may run out of memeory while
> processing those cubes.
> Based on my experience, disk is always the bottle neck, especially if
> you have a big cube and somebody runs a query which misses all the
> aggregates, the performance will be horrible. You need to fine tune the
> aggregates, also you may want to stree test your environment before
> putting it into production.
> --
> Eric Li
> SQL DBA
> MCDBA
>
> Jj wrote:
|||It all depends on how big your DW and staging DBs are. I would
separate the OLTP and OLAP DBs, because re-processing cubes is very disk/CPU
intensive. You should enable AWE to take advantage of the extra RAM. If you
create enough aggregates, I think your current disk config is OK. If
possible, I would dedicate some disks exclusively to AS. You
can always rebuild those cubes if necessary, but again, depending on your
business requirements, you may not be able to do this.
Depending on the size of your cubes, you may run into some performance
problems at the beginning; you have to keep fine-tuning your aggregates.
Performance-wise, I don't think 2 instances will help you much.
Eric Li
SQL DBA
MCDBA
Jj wrote:
> well, finally we have an option to reuse an existing server.
> With 8 * 32gb SCSI drives, 4gb of memory and 2 proc.
> And we'll have 6 small datawarehouses!
> I'll reformat this server and reorganize the disks.
> I'll add 4gb of memory
> We allready have SQL Server 2000 enterprise on this server!
> What are your idea about the disk organization?
> what do you think about this :
> 6 disks in RAID 5 (for OS, databases, olap cube) + 2 in RAID 0 (strip) (for
> tempdb and windows swap file)
> or the 8 disks in Raid 0+1 (strip + mirror) ?
> does the AWE option is recommended in this configuration?
> because this server will support staging area (6) and datawarehouses (6) =
> at least 12 databases
> Does it better to create 2 instance of SQL Server? (1 for staging area
> database, second for datawarehouse database) and starting the staging
> instance only when needed
> Does this configuration is better than all in 1 instance?
> thanks for your ideas.
> "Eric.Li" <anonymous@.microsoftnews.org> wrote in message
> news:OD$WSzkSEHA.3756@.TK2MSFTNGP11.phx.gbl...
|||What do you think if I dedicate 1 disk to the cubes?
So...
5 disks in RAID 5 for my databases,
2 in RAID 0 for tempdb,
1 disk for the cubes.
I'll have a lot of Reporting Services reports with some SQL access.
My OLAP cubes are less used, with an estimate of 2 to 4gb of cubes (25gb of
data warehouses).
"Eric.Li" <anonymous@.microsoftnews.org> wrote in message
news:OYhh53JTEHA.3988@.TK2MSFTNGP10.phx.gbl...
> It all depends on how big is your DW and staging DB. I would like to
> seperate OLTP and OLAP DB because re-processing cubes is very disk/CPU
> intensive. You should enable AWE to take advantage of extra RAM. If you
> create enough aggregates, I think your current disk config. is OK. If
> possible, I would like to dedicate some disks exclusively for AS. You
> can always rebuild those cubes if neccessary, but again, depends on your
> bussiness requirement, you may not able to do this.
> Depends on the size of your cubes, you may run into some performance
> problem at the beginning, you have to keep fine tuning your aggreates.
> Performancewise, I don't think 2 instance will help you much.
> --
> Eric Li
> SQL DBA
> MCDBA
> Jj wrote:
|||I wouldn't do that; 1 disk for all the cubes is a bad idea.
I suppose your users will create ad-hoc reports, so they will create
reports on the fly and most likely miss the aggregates; if that
happens, AS will read the whole file. It's just like joining some
tables without using an index: you end up with a disk scan. In the OLTP
world you would put each individual table on a different disk, and the
same idea applies to AS OLAP, but of course that's not practical. So I
would create a RAID 0, put my cubes there, and update the aggregates
(usage-based) once in a while. Beware that if you put in a lot of
aggregates, it will take much longer to process those cubes, and during
cube processing your CPU will probably shoot close to 80%~100%; in other
words, your box is pretty much inaccessible during that time. If your total
DW is only 4GB, that's only about 600MB per DW, which is not too bad, assuming
you don't have many concurrent users and they don't do crazy stuff. I used
to manage 50GB cubes, and each time a user drilled down to the lowest level
and missed the aggregates, it just killed the box.
Eric Li
SQL DBA
MCDBA
Jj wrote:
> What do you think if I dedicate 1 disk for the cubes ?
> So...
> 5 disks in RAID 5 for my databases
> 2 in Raid 0 for tempdb
> 1 disk for the cubes
> I'll have a lot of Reporting Services reports with some SQL access.
> My OLAP cubes are less used with an estimate of 2 to 4 gb of cubes. (25gb of
> atawarehouses)
>
> "Eric.Li" <anonymous@.microsoftnews.org> wrote in message
> news:OYhh53JTEHA.3988@.TK2MSFTNGP10.phx.gbl...
|||OK, last try ;-)
8 disks available today...
Disks 1-2 --> C: --> RAID 1 (mirror) for the OS and local backups prior to
DAT backup (32Gb available)
Disks 3-4 --> D: --> RAID 0 (striped) for the OLAP cubes (64Gb available)
+ tempdb
Disks 5-8 --> E: --> RAID 5 for my databases (96Gb available)
In this case I'm secure (except for the cubes, but I can recreate those
through a complete process).
I'll have a small performance impact during cube processing, but it's a weekly
load, and I'm not looking for a perfect process time for my cubes.
Also, in this configuration I'm losing only 2 disks (1 for the mirror, 1 for
the RAID 5).
Later, if we need more power, we'll add some disks to share the workload
regarding the real usage of the server.
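[Editor's note: the capacities quoted for this final layout check out under simple RAID arithmetic (32gb disks assumed, formatting overhead ignored):]

```python
# Sanity check of the three volumes in the proposed 8-disk layout.
DISK_GB = 32

c_drive = 2 * DISK_GB // 2        # RAID 1 mirror of 2 disks: 32 GB
d_drive = 2 * DISK_GB             # RAID 0 stripe of 2 disks: 64 GB
e_drive = (4 - 1) * DISK_GB       # 4-disk RAID 5, one disk to parity: 96 GB

print(c_drive, d_drive, e_drive)  # 32 64 96, matching the figures above
```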
"Eric.Li" <anonymous@.microsoftnews.org> wrote in message
news:OV2XE$LTEHA.2972@.TK2MSFTNGP12.phx.gbl...
> I won't do that, 1 disk for all the cubes is a bad idea.
> I suppose your users will create ah-hoc reports, so they will create
> reports on the fly, most likely they will miss aggregates, if that
> happens, AS will read the whole file. It's just like you join some
> tables without using the index, you end up with disk scan. In OLTP
> world, you would like to put each individual table on differnet disk,
> same idea applies to AS OLAP, but of course it's not practicle. So I
> would create RAID 0 and put my cubes there, and update aggregates (usage
> based) once in a while. Beware that, if you put alot aggregates, it will
> take much longer to process those cubes, and during cube processing,
> your CPU will probably shot close to 80%~100%, in other words, your box
> is pretty much unaccessable during that time. If your total DW is only
> 4GB, that's only about 600MB per DW, which is not too bad, assuming you
> don't have many concurrent users and they don't do crazy stuff. I used
> to manage 50GB cubes and each time a user drill down to the lowest level
> and miss the aggregates, it just kill the box.
>
> --
> Eric Li
> SQL DBA
> MCDBA
> Jj wrote:
|||This is a very reasonable config based on the hardware you have. If
it's only a weekly load and your current DW is only about 4GB, then I
wouldn't worry about reprocessing; unless your fact table comes from a very
complicated query, it shouldn't take too long.
I would schedule a job to archive those cubes after each full process;
it will save your current cube structure and all the aggregate config.
Eric Li
SQL DBA
MCDBA
Jj wrote:
> ok, last try ;-)
> 8 Disks available today...
> Disk 1 - 2 --> C: --> Raid 1 (mirror) for the OS and local backups prior to
> DAT backup (32Gb available)
> Disk 3 - 4 --> D: --> Raid 0 (stripped) for the OLAP Cubes (64Gb available)
> + tempdb
> Disk 5 - 8 --> E: --> Raid 5 for my databases (96Gb available)
> In this case I'm secure (except the cubes, but I can recreate these cubes
> through a complete process)
> I'm a small performance impact during cube process, but its a weekly
> loading. I'm not looking for a perfect process time of my cubes.
> Also in this configuration I'm loosing only 2 disks (1 for the mirror, 1 for
> the Raid 5)
> Later, If we need more power will add some disks to share the workload
> regarding the real usage of the server.
> "Eric.Li" <anonymous@.microsoftnews.org> wrote in message
> news:OV2XE$LTEHA.2972@.TK2MSFTNGP12.phx.gbl...
>
> --
> Eric Li
> SQL DBA
> MCDBA
> Jj wrote:
(25gb of[vbcol=seagreen]
(6)[vbcol=seagreen]
while[vbcol=seagreen]
the[vbcol=seagreen]
contain[vbcol=seagreen]
18[vbcol=seagreen]
other
>
|||This is a very reasonable config. based on the hardware you have. If
it's only a weekly load and your current DW is only about 4 GB, then I
won't worry about reprocessing, unless your fact table is from a very
complicated query, otherwise it shouldn't take too long.
I would schedule a job to archive those cubes after each full process,
it will save your current cube structure and all the aggregate config.
Eric Li
SQL DBA
MCDBA
Jj wrote:
> ok, last try ;-)
> 8 Disks available today...
> Disk 1 - 2 --> C: --> Raid 1 (mirror) for the OS and local backups prior to
> DAT backup (32Gb available)
> Disk 3 - 4 --> D: --> Raid 0 (stripped) for the OLAP Cubes (64Gb available)
> + tempdb
> Disk 5 - 8 --> E: --> Raid 5 for my databases (96Gb available)
> In this case I'm secure (except the cubes, but I can recreate these cubes
> through a complete process)
> I'm a small performance impact during cube process, but its a weekly
> loading. I'm not looking for a perfect process time of my cubes.
> Also in this configuration I'm loosing only 2 disks (1 for the mirror, 1 for
> the Raid 5)
> Later, If we need more power will add some disks to share the workload
> regarding the real usage of the server.
> "Eric.Li" <anonymous@.microsoftnews.org> a crit dans le message de
> news:OV2XE$LTEHA.2972@.TK2MSFTNGP12.phx.gbl...
>
> (25gb of
>
> (6)
>
> while
>
> the
>
> contain
>
> 18
>
> other
>
>
hardware recommendation
Hi,
I have to recommend a new server for a data warehouse environment that will host several data warehouse databases (5 databases of 5 GB each).
I plan on 6 GB of memory and some SCSI drives (5 x 18 GB in RAID 5, plus 2 x 18 GB in RAID 0 for tempdb).
There are 200 users on the system at the moment (not concurrent), and the usage will be low:
* OLAP cubes
* Reports with Reporting Services
* SharePoint web site (not Portal Server)
So I plan to use SQL Server 2000 Standard Edition with per-processor licensing.
SQL Server will use 3 GB of RAM (Standard Edition is limited),
Analysis Services will use 1 GB of RAM,
and the rest is for SharePoint and Reporting Services.
But I'm not sure about the processor...
What do you think about this?
I know that putting everything on one server is not good, but we don't have the budget for two SQL Server licenses.
Is there another licensing mode that could be used in my case?
Maybe I can put SharePoint on a Windows 2003 Web Edition server, and all the other components on a second, bigger server?
What are your recommendations?
Thanks.
Jerome.
I would like to give more memory to AS; you may run out of memory while processing those cubes.
Based on my experience, disk is always the bottleneck, especially if you have a big cube and somebody runs a query that misses all the aggregates; the performance will be horrible. You need to fine-tune the aggregates, and you may also want to stress-test your environment before putting it into production.
Eric Li
SQL DBA
MCDBA
|||Well, finally we have an option to reuse an existing server,
with 8 x 32 GB SCSI drives, 4 GB of memory and 2 processors.
And we'll have 6 small data warehouses!
I'll reformat this server and reorganize the disks, and I'll add 4 GB of memory.
We already have SQL Server 2000 Enterprise on this server!
What are your ideas about the disk organization? What do you think about this:
6 disks in RAID 5 (for OS, databases, OLAP cubes) + 2 in RAID 0 (stripe) for tempdb and the Windows swap file,
or all 8 disks in RAID 0+1 (stripe + mirror)?
Is the AWE option recommended in this configuration? This server will host staging areas (6) and data warehouses (6), so at least 12 databases.
Would it be better to create 2 instances of SQL Server (one for the staging databases, a second for the data warehouse databases) and start the staging instance only when needed?
Is that configuration better than everything in 1 instance?
Thanks for your ideas.
"Eric.Li" <anonymous@.microsoftnews.org> a crit dans le message de
news:OD$WSzkSEHA.3756@.TK2MSFTNGP11.phx.gbl...[vbcol=seagreen]
> I would like to give more memory to AS, you may run out of memeory while
> processing those cubes.
> Based on my experience, disk is always the bottle neck, especially if
> you have a big cube and somebody runs a query which misses all the
> aggregates, the performance will be horrible. You need to fine tune the
> aggregates, also you may want to stree test your environment before
> putting it into production.
> --
> Eric Li
> SQL DBA
> MCDBA
>
> Jj wrote:
some[vbcol=seagreen]
gb[vbcol=seagreen]
budget[vbcol=seagreen]
|||It all depends on how big your DW and staging DBs are. I would like to separate the OLTP and OLAP databases, because re-processing cubes is very disk/CPU intensive. You should enable AWE to take advantage of the extra RAM. If you create enough aggregates, I think your current disk config is OK. If possible, I would dedicate some disks exclusively to AS. You can always rebuild those cubes if necessary, but again, depending on your business requirements, you may not be able to do this.
Depending on the size of your cubes, you may run into some performance problems at the beginning; you'll have to keep fine-tuning your aggregates.
Performance-wise, I don't think 2 instances will help you much.
Eric Li
SQL DBA
MCDBA
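Enabling AWE on SQL Server 2000 Enterprise is done through `sp_configure`; a minimal sketch, assuming sysadmin rights (the 3072 MB cap is only an example value, pick one that leaves room for AS, Reporting Services and SharePoint):

```sql
-- Enabling AWE requires a restart of the SQL Server service to take effect.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'awe enabled', 1;
RECONFIGURE;
-- With AWE on, SQL Server 2000 no longer manages memory dynamically,
-- so cap it explicitly (value in MB; 3072 here is a hypothetical figure).
EXEC sp_configure 'max server memory', 3072;
RECONFIGURE;
```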
|||What do you think if I dedicate 1 disk to the cubes?
So...
5 disks in RAID 5 for my databases,
2 in RAID 0 for tempdb,
1 disk for the cubes.
I'll have a lot of Reporting Services reports with some SQL access.
My OLAP cubes are less used, with an estimate of 2 to 4 GB of cubes (25 GB of data warehouses).
"Eric.Li" <anonymous@.microsoftnews.org> a crit dans le message de
news:OYhh53JTEHA.3988@.TK2MSFTNGP10.phx.gbl...[vbcol=seagreen]
> It all depends on how big is your DW and staging DB. I would like to
> seperate OLTP and OLAP DB because re-processing cubes is very disk/CPU
> intensive. You should enable AWE to take advantage of extra RAM. If you
> create enough aggregates, I think your current disk config. is OK. If
> possible, I would like to dedicate some disks exclusively for AS. You
> can always rebuild those cubes if neccessary, but again, depends on your
> bussiness requirement, you may not able to do this.
> Depends on the size of your cubes, you may run into some performance
> problem at the beginning, you have to keep fine tuning your aggreates.
> Performancewise, I don't think 2 instance will help you much.
> --
> Eric Li
> SQL DBA
> MCDBA
> Jj wrote:
(for[vbcol=seagreen]
=[vbcol=seagreen]
processor
>
|||I won't do that; 1 disk for all the cubes is a bad idea.
I suppose your users will create ad-hoc reports, so they will build reports on the fly and most likely miss the aggregates; if that happens, AS will read the whole file. It's just like joining tables without using an index: you end up with a disk scan. In the OLTP world, you would like to put each individual table on a different disk; the same idea applies to AS OLAP, but of course it's not practical. So I would create a RAID 0 set, put my cubes there, and update the aggregates (usage-based) once in a while. Beware that if you add a lot of aggregates, it will take much longer to process those cubes, and during cube processing your CPU will probably shoot close to 80%~100%; in other words, your box is pretty much inaccessible during that time. If your total DW is only 4 GB, that's only about 600 MB per DW, which is not too bad, assuming you don't have many concurrent users and they don't do crazy stuff. I used to manage 50 GB cubes, and each time a user drilled down to the lowest level and missed the aggregates, it just killed the box.
Eric Li
SQL DBA
MCDBA
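To sanity-check the per-DW size estimate against the actual databases, the standard system procedures give a quick answer (the database name `MyDW` below is a hypothetical placeholder):

```sql
-- Lists every database on the instance with DATABASE_SIZE reported in KB.
EXEC sp_databases;

-- Detailed breakdown (data vs. log, reserved vs. used) for one database.
USE MyDW;          -- hypothetical database name
EXEC sp_spaceused;
```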
|||OK, last try ;-)
8 disks available today...
Disks 1-2 --> C: --> RAID 1 (mirror) for the OS and local backups prior to the DAT backup (32 GB available)
Disks 3-4 --> D: --> RAID 0 (striped) for the OLAP cubes (64 GB available) + tempdb
Disks 5-8 --> E: --> RAID 5 for my databases (96 GB available)
In this case I'm secure (except for the cubes, but I can recreate those through a complete process).
I'll have a small performance impact during cube processing, but it's a weekly load; I'm not looking for a perfect processing time for my cubes.
Also, in this configuration I'm losing only 2 disks of capacity (1 for the mirror, 1 for the RAID 5).
Later, if we need more power, we'll add some disks to spread the workload according to the real usage of the server.
"Eric.Li" <anonymous@.microsoftnews.org> a crit dans le message de
news:OV2XE$LTEHA.2972@.TK2MSFTNGP12.phx.gbl...[vbcol=seagreen]
> I won't do that, 1 disk for all the cubes is a bad idea.
> I suppose your users will create ah-hoc reports, so they will create
> reports on the fly, most likely they will miss aggregates, if that
> happens, AS will read the whole file. It's just like you join some
> tables without using the index, you end up with disk scan. In OLTP
> world, you would like to put each individual table on differnet disk,
> same idea applies to AS OLAP, but of course it's not practicle. So I
> would create RAID 0 and put my cubes there, and update aggregates (usage
> based) once in a while. Beware that, if you put alot aggregates, it will
> take much longer to process those cubes, and during cube processing,
> your CPU will probably shot close to 80%~100%, in other words, your box
> is pretty much unaccessable during that time. If your total DW is only
> 4GB, that's only about 600MB per DW, which is not too bad, assuming you
> don't have many concurrent users and they don't do crazy stuff. I used
> to manage 50GB cubes and each time a user drill down to the lowest level
> and miss the aggregates, it just kill the box.
>
> --
> Eric Li
> SQL DBA
> MCDBA
> Jj wrote:
(25gb of[vbcol=seagreen]
(6)[vbcol=seagreen]
while[vbcol=seagreen]
the[vbcol=seagreen]
contain[vbcol=seagreen]
18[vbcol=seagreen]
other
>
|||This is a very reasonable config based on the hardware you have. If it's only a weekly load and your current DW is only about 4 GB, then I wouldn't worry about reprocessing; unless your fact table comes from a very complicated query, it shouldn't take too long.
I would schedule a job to archive those cubes after each full process; it will save your current cube structure and all the aggregate config.
Eric Li
SQL DBA
MCDBA
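One way to schedule that archive is a SQL Server Agent CmdExec job wrapping the Analysis Services 2000 `msmdarch` utility. A sketch only: the server name, OLAP data folder, AS database name, and archive target below are all hypothetical placeholders, and the `msmdarch` argument order should be checked against your installation before relying on it:

```sql
-- Creates an Agent job that archives one AS database to a .cab file
-- (schedule it right after your weekly full process).
EXEC msdb.dbo.sp_add_job
    @job_name = N'Archive OLAP cubes';

EXEC msdb.dbo.sp_add_jobstep
    @job_name  = N'Archive OLAP cubes',
    @step_name = N'Run msmdarch',
    @subsystem = N'CMDEXEC',
    -- /a = archive; arguments: server, OLAP data folder, AS database, target file.
    @command   = N'msmdarch /a MYSERVER "C:\Program Files\Microsoft Analysis Services\Data\" "MyOlapDB" "D:\Backup\MyOlapDB.cab"';

EXEC msdb.dbo.sp_add_jobserver
    @job_name = N'Archive OLAP cubes';
```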
hardware recommendation
Hi,
I've to recommend a new server for a datawarehouse server which contain some
data warehouse databases (5 databases of 5Gb each)
I plan 6gb of memory and some SCSI drives (5 * 18 gb in raid 5 + 2 * 18 gb
in raid 0 for the tempdb)
there is 200 users for the system for the moment (not concurrently)
and the usage will be low:
* OLAP Cubes
* Reports with reporting services
* Sharepoint web site (not portal server)
So, I'll plan to use SQL server2000 standard edition with a by processor
licensing
SQL Server will use 3gb of RAM (standard edition is limited)
Analysis services will use 1gb of RAM
and the rest is for sharepoint and reporting services
But I'm not sure about the processor...
what do you think about this?
I know that putting all in 1 server is not good, but we have not the budget
to get 2 SQL Server licenses.
Does another licensing mode can be used in my case?
Maybe I can put sharepoint on a Win 2003 Web edition, and all the other
components on a second biggest server?
what are your recommendations?
thanks.
Jerome.I would like to give more memory to AS, you may run out of memeory while
processing those cubes.
Based on my experience, disk is always the bottle neck, especially if
you have a big cube and somebody runs a query which misses all the
aggregates, the performance will be horrible. You need to fine tune the
aggregates, also you may want to stree test your environment before
putting it into production.
--
Eric Li
SQL DBA
MCDBA
Jéjé wrote:
> Hi,
> I've to recommend a new server for a datawarehouse server which contain some
> data warehouse databases (5 databases of 5Gb each)
> I plan 6gb of memory and some SCSI drives (5 * 18 gb in raid 5 + 2 * 18 gb
> in raid 0 for the tempdb)
> there is 200 users for the system for the moment (not concurrently)
> and the usage will be low:
> * OLAP Cubes
> * Reports with reporting services
> * Sharepoint web site (not portal server)
> So, I'll plan to use SQL server2000 standard edition with a by processor
> licensing
> SQL Server will use 3gb of RAM (standard edition is limited)
> Analysis services will use 1gb of RAM
> and the rest is for sharepoint and reporting services
> But I'm not sure about the processor...
> what do you think about this?
> I know that putting all in 1 server is not good, but we have not the budget
> to get 2 SQL Server licenses.
> Does another licensing mode can be used in my case?
> Maybe I can put sharepoint on a Win 2003 Web edition, and all the other
> components on a second biggest server?
> what are your recommendations?
> thanks.
> Jerome.
>|||well, finally we have an option to reuse an existing server.
With 8 * 32gb SCSI drives, 4gb of memory and 2 proc.
And we'll have 6 small datawarehouses!
I'll reformat this server and reorganize the disks.
I'll add 4gb of memory
We allready have SQL Server 2000 enterprise on this server!
What are your idea about the disk organization?
what do you think about this :
6 disks in RAID 5 (for OS, databases, olap cube) + 2 in RAID 0 (strip) (for
tempdb and windows swap file)
or the 8 disks in Raid 0+1 (strip + mirror) ?
does the AWE option is recommended in this configuration?
because this server will support staging area (6) and datawarehouses (6) =at least 12 databases
Does it better to create 2 instance of SQL Server? (1 for staging area
database, second for datawarehouse database) and starting the staging
instance only when needed
Does this configuration is better than all in 1 instance?
thanks for your ideas.
"Eric.Li" <anonymous@.microsoftnews.org> a écrit dans le message de
news:OD$WSzkSEHA.3756@.TK2MSFTNGP11.phx.gbl...
> I would like to give more memory to AS; you may run out of memory while
> processing those cubes.
> Based on my experience, disk is always the bottleneck, especially if
> you have a big cube and somebody runs a query which misses all the
> aggregates; the performance will be horrible. You need to fine-tune the
> aggregates, and you may also want to stress-test your environment before
> putting it into production.
> --
> Eric Li
> SQL DBA
> MCDBA
>
> Jéjé wrote:
> > Hi,
> >
> > I have to recommend a new server for a data warehouse that will contain some
> > data warehouse databases (5 databases of 5GB each).
> >
> > I plan on 6GB of memory and some SCSI drives (5 * 18GB in RAID 5 + 2 * 18GB
> > in RAID 0 for tempdb).
> >
> > There are 200 users of the system for the moment (not concurrent),
> > and the usage will be low:
> > * OLAP cubes
> > * Reports with Reporting Services
> > * SharePoint web site (not Portal Server)
> >
> > So, I plan to use SQL Server 2000 Standard Edition with per-processor
> > licensing.
> > SQL Server will use 3GB of RAM (Standard Edition is limited),
> > Analysis Services will use 1GB of RAM,
> > and the rest is for SharePoint and Reporting Services.
> >
> > But I'm not sure about the processor...
> >
> > What do you think about this?
> >
> > I know that putting everything on 1 server is not good, but we don't have the
> > budget to get 2 SQL Server licenses.
> > Can another licensing mode be used in my case?
> >
> > Maybe I can put SharePoint on a Win 2003 Web Edition server, and all the other
> > components on a second, bigger server?
> >
> > What are your recommendations?
> >
> > Thanks.
> >
> > Jerome.
> >
|||It all depends on how big your DW and staging DBs are. I would like to
separate the OLTP and OLAP DBs because reprocessing cubes is very disk/CPU
intensive. You should enable AWE to take advantage of the extra RAM. If you
create enough aggregates, I think your current disk config is OK. If
possible, I would dedicate some disks exclusively to AS. You
can always rebuild those cubes if necessary, but again, depending on your
business requirements, you may not be able to do this.
Depending on the size of your cubes, you may run into some performance
problems at the beginning; you have to keep fine-tuning your aggregates.
Performance-wise, I don't think 2 instances will help you much.
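For reference, enabling AWE on SQL Server 2000 is done through sp_configure. A minimal sketch, assuming Windows boots with the /PAE switch and the service account holds the "Lock pages in memory" privilege; the 3072 MB cap is only an example figure for this box, not a recommendation:

```sql
-- Enable AWE (takes effect after the SQL Server service is restarted).
EXEC sp_configure 'show advanced options', 1
RECONFIGURE
EXEC sp_configure 'awe enabled', 1
RECONFIGURE
-- With AWE on, SQL 2000 no longer manages memory dynamically, so cap it
-- explicitly and leave room for Analysis Services and Reporting Services.
EXEC sp_configure 'max server memory', 3072   -- value in MB, example only
RECONFIGURE
```

After the restart, DBCC MEMORYSTATUS can confirm how much memory the server actually grabbed.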
--
Eric Li
SQL DBA
MCDBA
|||What do you think if I dedicate 1 disk to the cubes?
So...
5 disks in RAID 5 for my databases,
2 in RAID 0 for tempdb,
1 disk for the cubes.
I'll have a lot of Reporting Services reports with some SQL access.
My OLAP cubes are less used, with an estimate of 2 to 4GB of cubes (25GB of
data warehouses).
"Eric.Li" <anonymous@.microsoftnews.org> wrote in message
news:OYhh53JTEHA.3988@.TK2MSFTNGP10.phx.gbl...
|||I wouldn't do that; 1 disk for all the cubes is a bad idea.
I suppose your users will create ad-hoc reports, so they will build
reports on the fly and will most likely miss the aggregates; if that
happens, AS will read the whole file. It's just like joining some
tables without using an index: you end up with a disk scan. In the OLTP
world, you would like to put each individual table on a different disk;
the same idea applies to AS OLAP, but of course it's not practical. So I
would create a RAID 0 set and put my cubes there, and update the aggregates
(usage-based) once in a while. Beware that if you add a lot of aggregates, it will
take much longer to process those cubes, and during cube processing
your CPU will probably shoot up to 80%~100%; in other words, your box
is pretty much inaccessible during that time. If your total DW is only
4GB, that's only about 600MB per DW, which is not too bad, assuming you
don't have many concurrent users and they don't do crazy stuff. I used
to manage 50GB cubes, and each time a user drilled down to the lowest level
and missed the aggregates, it just killed the box.
Eric Li
SQL DBA
MCDBA
|||OK, last try ;-)
8 disks available today...
Disks 1 - 2 --> C: --> RAID 1 (mirror) for the OS and local backups prior to
DAT backup (32GB available)
Disks 3 - 4 --> D: --> RAID 0 (striped) for the OLAP cubes (64GB available)
+ tempdb
Disks 5 - 8 --> E: --> RAID 5 for my databases (96GB available)
In this case I'm safe (except for the cubes, but I can recreate those cubes
through a complete process).
I'll take a small performance hit during cube processing, but it's a weekly
load; I'm not looking for a perfect processing time for my cubes.
Also, in this configuration I'm losing only 2 disks (1 for the mirror, 1 for
the RAID 5).
Later, if we need more power, we'll add some disks to spread the workload
according to the real usage of the server.
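As an aside, putting tempdb on the striped D: drive is done with ALTER DATABASE. A minimal sketch, assuming the default logical file names (tempdev/templog) and a D:\tempdb folder; verify your actual names with sp_helpfile first:

```sql
-- Check tempdb's current logical file names and paths:
--   USE tempdb; EXEC sp_helpfile
USE master
ALTER DATABASE tempdb
    MODIFY FILE (NAME = tempdev, FILENAME = 'D:\tempdb\tempdb.mdf')
ALTER DATABASE tempdb
    MODIFY FILE (NAME = templog, FILENAME = 'D:\tempdb\templog.ldf')
-- The new location takes effect the next time the SQL Server service restarts.
```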
"Eric.Li" <anonymous@.microsoftnews.org> wrote in message
news:OV2XE$LTEHA.2972@.TK2MSFTNGP12.phx.gbl...
|||This is a very reasonable config based on the hardware you have. If
it's only a weekly load and your current DW is only about 4 GB, then I
wouldn't worry about reprocessing; unless your fact table comes from a very
complicated query, it shouldn't take too long.
I would schedule a job to archive those cubes after each full process;
it will save your current cube structure and all the aggregate configuration.
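A scheduled archive job like that could be sketched with the msdb job procedures and a CmdExec step calling msmdarch (the Analysis Services 2000 archive utility). The server name, data path, OLAP database name, target file, and schedule below are all placeholders, and the msmdarch argument order should be checked against your installation:

```sql
USE msdb
-- Create the job and a CmdExec step that archives one OLAP database.
EXEC sp_add_job @job_name = 'Archive OLAP cubes'
EXEC sp_add_jobstep
    @job_name  = 'Archive OLAP cubes',
    @step_name = 'msmdarch',
    @subsystem = 'CMDEXEC',
    @command   = 'msmdarch /a MYSERVER "C:\Program Files\Microsoft Analysis Services\Data\" MyDW "E:\Backups\MyDW.cab"'
-- Run weekly, after the Sunday load (times/days are examples).
EXEC sp_add_jobschedule
    @job_name = 'Archive OLAP cubes',
    @name = 'Weekly after load',
    @freq_type = 8,               -- weekly
    @freq_interval = 1,           -- Sunday
    @freq_recurrence_factor = 1,
    @active_start_time = 60000    -- 06:00:00
EXEC sp_add_jobserver @job_name = 'Archive OLAP cubes'  -- target local server
```

One job step per OLAP database would cover all six data warehouses.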
Eric Li
SQL DBA
MCDBA
|||It all depends on how big your DW and staging DB are. I would like to
separate OLTP and OLAP databases, because re-processing cubes is very disk/CPU
intensive. You should enable AWE to take advantage of the extra RAM. If you
create enough aggregates, I think your current disk config is OK. If
possible, I would dedicate some disks exclusively to AS. You
can always rebuild those cubes if necessary, but again, depending on your
business requirements, you may not be able to do this.
Depending on the size of your cubes, you may run into some performance
problems at the beginning; you have to keep fine-tuning your aggregates.
Performance-wise, I don't think 2 instances will help you much.
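For reference, a minimal sketch of turning AWE on. This assumes SQL Server 2000 Enterprise and that the service account already holds the "Lock Pages in Memory" Windows privilege; the 3072 MB cap is an illustrative value, not something from this thread:

```sql
-- Enable AWE so the instance can address memory above the 4 GB line.
EXEC sp_configure 'show advanced options', 1
RECONFIGURE
EXEC sp_configure 'awe enabled', 1
RECONFIGURE
-- With AWE on, SQL Server 2000 no longer manages memory dynamically,
-- so set an explicit ceiling to leave RAM for Analysis Services,
-- Reporting Services and SharePoint (value in MB, illustrative).
EXEC sp_configure 'max server memory', 3072
RECONFIGURE
-- A restart of the SQL Server service is required for 'awe enabled'
-- to take effect.
```

Note that 'awe enabled' does nothing on Standard Edition; it needs Enterprise plus the /PAE boot switch on the OS side.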
--
Eric Li
SQL DBA
MCDBA
Jéjé wrote:
> well, finally we have an option to reuse an existing server,
> with 8 * 32 GB SCSI drives, 4 GB of memory and 2 processors.
> And we'll have 6 small datawarehouses!
> I'll reformat this server and reorganize the disks.
> I'll add 4 GB of memory.
> We already have SQL Server 2000 Enterprise on this server!
> What are your ideas about the disk organization?
> What do you think about this:
> 6 disks in RAID 5 (for OS, databases, OLAP cubes) + 2 in RAID 0 (stripe) (for
> tempdb and windows swap file)
> or the 8 disks in RAID 0+1 (stripe + mirror)?
> Is the AWE option recommended in this configuration?
> Because this server will support staging areas (6) and datawarehouses (6),
> that's at least 12 databases.
> Would it be better to create 2 instances of SQL Server (1 for the staging
> area databases, the second for the datawarehouse databases) and start the
> staging instance only when needed?
> Is this configuration better than everything in 1 instance?
> thanks for your ideas.
>|||What do you think if I dedicate 1 disk for the cubes ?
So...
5 disks in RAID 5 for my databases
2 in Raid 0 for tempdb
1 disk for the cubes
I'll have a lot of Reporting Services reports with some SQL access.
My OLAP cubes are less used, with an estimated 2 to 4 GB of cubes (25 GB of
datawarehouses).
"Eric.Li" <anonymous@.microsoftnews.org> a écrit dans le message de
news:OYhh53JTEHA.3988@.TK2MSFTNGP10.phx.gbl...
>|||I won't do that, 1 disk for all the cubes is a bad idea.
I suppose your users will create ad-hoc reports, so they will create
reports on the fly; most likely they will miss the aggregates, and if that
happens, AS will read the whole file. It's just like joining some
tables without using an index: you end up with a disk scan. In the OLTP
world, you would like to put each individual table on a different disk;
the same idea applies to AS OLAP, but of course it's not practical. So I
would create a RAID 0 set and put my cubes there, and update the aggregates
(usage based) once in a while. Beware that if you put in a lot of aggregates,
it will take much longer to process those cubes, and during cube processing
your CPU will probably shoot close to 80%~100%; in other words, your box
is pretty much inaccessible during that time. If your total DW is only
4 GB, that's only about 600 MB per DW, which is not too bad, assuming you
don't have many concurrent users and they don't do crazy stuff. I used
to manage 50 GB cubes, and each time a user drilled down to the lowest level
and missed the aggregates, it just killed the box.
Eric Li
SQL DBA
MCDBA
Jéjé wrote:
>|||ok, last try ;-)
8 disks available today...
Disk 1 - 2 --> C: --> RAID 1 (mirror) for the OS and local backups prior to
DAT backup (32 GB available)
Disk 3 - 4 --> D: --> RAID 0 (striped) for the OLAP cubes (64 GB available)
+ tempdb
Disk 5 - 8 --> E: --> RAID 5 for my databases (96 GB available)
In this case I'm secure (except for the cubes, but I can recreate those
through a complete process).
I'll have a small performance impact during cube processing, but it's a weekly
load; I'm not looking for a perfect processing time for my cubes.
Also, in this configuration I'm losing only 2 disks (1 for the mirror, 1 for
the RAID 5).
Later, if we need more power, we'll add some disks to share the workload
according to the real usage of the server.
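In case it helps with that layout, here's a rough sketch of pointing tempdb at the striped D: drive. The logical names tempdev/templog are the SQL Server defaults, but the path and sizes are illustrative, and the move only takes effect after a service restart:

```sql
-- Relocate tempdb's data and log files to the RAID 0 volume (illustrative path).
ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, FILENAME = 'D:\tempdb\tempdb.mdf')
ALTER DATABASE tempdb
MODIFY FILE (NAME = templog, FILENAME = 'D:\tempdb\templog.ldf')
-- Pre-sizing tempdb avoids autogrow pauses during heavy report queries
-- (sizes are placeholder values, tune to your workload).
ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, SIZE = 2048MB, FILEGROWTH = 256MB)
```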
"Eric.Li" <anonymous@.microsoftnews.org> a écrit dans le message de
news:OV2XE$LTEHA.2972@.TK2MSFTNGP12.phx.gbl...
>|||This is a very reasonable config. based on the hardware you have. If
it's only a weekly load and your current DW is only about 4 GB, then I
wouldn't worry about reprocessing; unless your fact table comes from a very
complicated query, it shouldn't take too long.
I would schedule a job to archive those cubes after each full process;
it will save your current cube structure and all the aggregate config.
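Something like this could be scheduled after the weekly load; it's a sketch using the msmdarch.exe archive utility that ships with Analysis Services 2000. The server name, data path and file names here are placeholders, so check them against your install:

```shell
REM Archive one AS 2000 database to a .cab file after the full process.
REM Rough syntax: msmdarch /a <server> <data folder> <database> <archive file>
"C:\Program Files\Microsoft Analysis Services\Bin\msmdarch.exe" /a MYSERVER ^
  "C:\Program Files\Microsoft Analysis Services\Data" "MyDW" "E:\Backup\MyDW.cab"
```

The same utility restores with /r, which is how you'd get the cube structure and aggregation design back after a rebuild.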
Eric Li
SQL DBA
MCDBA
Jéjé wrote: