
Monday, March 12, 2012

Hardware requirements

Hello!
Does anyone have any idea what the hardware requirements for a SQL 2000 box
would be if I would like to process about 20,000 - 30,000 inserts per second?
We need to insert a large amount of data into a SQL database, and that would be
the peak number of inserts that we need. Any rough numbers? (number of
processors, RAM, disk subsystem configuration...) Database size and number of
user connections are not an important factor at this time since they are quite
small (100 GB, 50 connections).
Does anyone have similar processing power on SQL?
Thanks
Dan

20-30k inserts/sec is possible on a 2-CPU Xeon, but it
depends on exactly what you are doing and how.
If the only meaningful load is the inserts, then a dual
processor system should be able to handle your load;
otherwise, you might go to a 4-CPU box.
RAM and disks will depend on the specifics of what you
are doing.
I will be talking on this subject at the next SQL Server
Magazine Connections conference (www.sqlconnections.com)
-joe chang
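
As a rough illustration of why the "how" matters: committing every row in
its own transaction forces a log flush per insert, while grouping many rows
into one explicit transaction amortizes that cost, which is usually what
makes rates like 20-30k/sec reachable. A minimal T-SQL sketch (the table
dbo.Events is hypothetical):

-- Sketch only: dbo.Events is a hypothetical table.
CREATE TABLE dbo.Events (
    EventId  INT IDENTITY(1,1) PRIMARY KEY,
    Payload  VARCHAR(100) NOT NULL,
    LoggedAt DATETIME NOT NULL DEFAULT GETDATE()
)
GO

-- Slow pattern: one implicit transaction (one log flush) per row.
INSERT INTO dbo.Events (Payload) VALUES ('row 1')

-- Faster pattern: many rows committed per explicit transaction.
BEGIN TRANSACTION
INSERT INTO dbo.Events (Payload) VALUES ('row 1')
INSERT INTO dbo.Events (Payload) VALUES ('row 2')
INSERT INTO dbo.Events (Payload) VALUES ('row 3')
-- ... hundreds more per batch ...
COMMIT TRANSACTION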

Monday, February 27, 2012

handling nulls with inserts

Hi,

I've got a program that has a form. On the form there are several fields which are optional that I would prefer to remain as NULL in the database (rather than a default value) if the user doesn't fill them out. I'm using stored procedures for my inserts (SQL 2000 + C#).

How do most people handle these situations? Right now I have a separate function which receives the parameter values as params and then handles creating the parameters and executing the stored procedure. I'm not sure how to indicate that I want one of those to be NULL, at least not without having multiple functions and possibly various stored procedures for the different possibilities.

Thanks.

I would recommend not using stored procedures if this is what you want to accomplish. Like you said, you would have to create multiple stored procedures for each variation. Instead, create the INSERT in your code. I use a StringBuilder to build the SQL statement. Within the logic of your system you can use if/then statements to determine what the user entered and therefore how to build the SQL and the correlating params.

Have you tried to pass the value DBNull.Value to the stored procedure? Not sure that this would work, but it might be worth a try if you want to keep your stored procedures.
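
If you'd rather keep a single stored procedure, another option on the T-SQL side (a sketch; dbo.Contacts and dbo.InsertContact are hypothetical names) is to give the optional parameters a default of NULL, so the caller only supplies the values it actually has:

-- Sketch only: optional parameters default to NULL, so one
-- procedure covers every combination of filled-in form fields.
CREATE PROCEDURE dbo.InsertContact
    @Name  VARCHAR(50),          -- required
    @Phone VARCHAR(20) = NULL,   -- optional
    @Email VARCHAR(50) = NULL    -- optional
AS
INSERT INTO dbo.Contacts (Name, Phone, Email)
VALUES (@Name, @Phone, @Email)
GO

-- Parameters that are omitted arrive as NULL:
EXEC dbo.InsertContact @Name = 'Dan', @Email = 'dan@example.com'

From C#, passing DBNull.Value for a parameter does work as suggested above; with NULL defaults you can also simply skip adding the parameter at all.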

Hope that helps.

Handling errors in transactional replication

Hello,
I have this problem: I have transactional replication
set up with 2 tables as articles for a publisher. Consider a
transaction which inserts one row into each of these
tables in the publisher database. At the subscriber database
one of the commands succeeds and the other fails (let's say
because of a duplicate value for a key field).
What I would like is ...
1. Enable the error to be logged to an application table.
2. Have both the commands fail (not just one)
3. Have the replication continue without erroring out
4. Have the erroneous transaction be retried at a later time.
Could you help me resolve this sticky problem?
Answers on how to achieve any of the points are welcome
too.
Thanks,
Ramuk
If you have a transaction on the publisher, you may check @@error and then
call rollback, but whether you rollback or not the sp is still executed on
the subscriber. This situation is altered (no subscriber call) if you set
the transaction isolation level to serializable. This is important to do
because even if you trap the same error in the transaction on the subscriber
and rollback there, the error is registered and the distribution agent will
fail. SkipErrors would avoid this problem but ideally the call shouldn't be
sent from the publisher to the subscriber if it has already failed once.
Logging of this error could be done from within the transaction using
xp_logevent or simply inserting to a SQL table.
HTH,
Paul
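
A sketch of the logging idea (dbo.ReplErrors is a hypothetical application table, and the insert stands in for one of the replicated commands):

-- Sketch only: trap the error, roll back, then log it.
DECLARE @err INT
BEGIN TRANSACTION
INSERT INTO dbo.TableA (KeyCol, Val) VALUES (1, 'a')
SET @err = @@ERROR
IF @err <> 0
BEGIN
    ROLLBACK TRANSACTION
    -- Log to an application table (after the rollback, so the row survives)...
    INSERT INTO dbo.ReplErrors (ErrAt, ErrNo, Msg)
    VALUES (GETDATE(), @err, 'Insert into TableA failed')
    -- ...and/or to the event log; xp_logevent is not transactional.
    EXEC master.dbo.xp_logevent 60000, 'Insert into TableA failed', informational
    RETURN
END
COMMIT TRANSACTION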
Thanks for the response ... though the problem I face is that the transaction succeeds on the publisher. It's only on the subscriber that it fails.
Any ideas?
Ramuk,
in theory this shouldn't be possible! If all the articles are replicated
then the same transaction should be applied.
What is the error on the subscriber?
Regards,
Paul
Paul,
We are in the design stage ... so the problem is still theoretical.
The problem is that when two commands which are part of the same transaction are applied at the subscriber, it is possible for one of the commands to fail (because of business logic at the subscriber's end).
If I were to go the route of using "Continue on Data Consistency Errors" then the failed command will not be applied on the subscriber but the other command will ... which causes an invalid state. If I do not set "Continue on Data Consistency Errors" ... then the replication would stop ... that is the crux of the problem.
So an ideal solution would be for neither command to be applied if either one errors ... the error being logged into an application table (for monitoring purposes) ... and if possible the transaction marked for retry later.
Also another question ... can I get the value of the "Transaction sequence number" at the distributor? Can it be passed to the custom SPs at the subscriber?
Thanks for the help,
Ramuk.
Ramuk,
my recollection is that if the 2 data modifications are wrapped in a
transaction which has error-trapping (or SET XACT_ABORT ON) then the
transaction will be completely rolled back on an error at the
subscriber. I haven't tried this out in all circumstances, and AFAIR you'll
still need -SkipErrors. Am off home now, but will retest it some time
tomorrow.
Regards,
Paul
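
A sketch of that pattern (the article table names are hypothetical stand-ins): with SET XACT_ABORT ON, a run-time error such as a duplicate key on either insert terminates the batch and rolls back the whole transaction, so neither command is applied:

-- Sketch only: both modifications succeed, or neither does.
SET XACT_ABORT ON
BEGIN TRANSACTION
INSERT INTO dbo.Article1 (KeyCol, Val) VALUES (1, 'a')
INSERT INTO dbo.Article2 (KeyCol, Val) VALUES (1, 'b')
COMMIT TRANSACTION
-- A duplicate-key error on either INSERT aborts the batch
-- and rolls back the entire transaction.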