Wednesday, March 7, 2012
Hanging on one table
Sometimes (often) my merge replication agent hangs at this step:
Processing article 'table1'...
or (occasionally):
The merge process is cleaning up meta data in database 'db1'...
table1 is a large table (22 fields, approx 400,000 rows) with no Primary Key.
I have no idea what the 'meta data' message means.
After this hang/error, the same thing happens again the next time we try.
What is happening here? Please help...
TIA
echo
Can you post the exact error message you are getting?
Also, right-click on your problem merge agent and select Agent Properties,
click on Steps, and then click on Run agent. Then click Edit and at the end
of the command you find there, hit the space bar and type -QueryTimeout 600.
Then restart your merge agent.
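For illustration, after that edit the agent's Run agent step command might end up looking something like this (the server, database, and publication names here are placeholders, not values from the thread):
-Publisher [PUBSERVER] -PublisherDB [pubdb] -Publication [pub1]
-Subscriber [SUBSERVER] -SubscriberDB [subdb]
-Distributor [PUBSERVER] -QueryTimeout 600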
Hilary Cotter
Looking for a book on SQL Server replication?
http://www.nwsu.com/0974973602.html
"echo" <echo@.discussions.microsoft.com> wrote in message
news:922613C3-C666-412F-B35B-F70B2B044F95@.microsoft.com...
> Hi,
> sometimes (often) my merge replication agent hang on this step:
> Processing article 'table1'...
> or (occasionally):
> The merge process is cleaning up meta data in database 'db1'...
> table1 is a large table (22 fields, approx 400.000 rows) with no Primary
Key.
> I don't have any idea with 'meta data' errors.
> After this hang/error we always have this happen again the next time we
tried again.
> what happen with this? pls help...
> TIA
> echo
Monday, February 27, 2012
Handling merge replication foreign key conflicts
What I would like to happen is to have the table B records on the client be deleted by the merge process, and to create a log of the event. I've looked into creating a business logic handler to do this, but I'm not sure what type of conflict this is (UpdateDeleteConflict or otherwise), and I'm not sure that deleting the table B records is something I can do in the business logic handler.
This seems like it would be a common problem in merge replication. I'm not locked into using a custom business logic handler at all. Any suggestions are welcome.
Thanks.
Any ideas on this?
|||
Presumably you have a CASCADE DELETE to trash the child ("table B") rows whenever you delete the parent ("table A") rows
- whether by DRI or by trigger.
If you were using transactional replication, the child-then-parent delete would be part of the same atomic transaction
- thus you would get both or neither.
Possibly using transactional replication with immediate updating would avoid updates to such ghosts.
However, merge replication [yes, I recognise its benefits!] will inevitably have to cope with unconstrained race conditions across the fabric; therefore you would have to cope with this situation.
It seems that you could:
1. write custom resolvers, or
2. delve into the guts of the sprocs that actually execute the I/U/D statements [or use INSTEAD OF triggers], or
3. simply add an extra IsDeleted flag on table A and table B for soft-delete [and do the hard delete at a quiet time] - a minimal sketch follows.
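A sketch of option 3, assuming hypothetical parent/child tables TableA and TableB linked by a TableAId column (none of these names are from the original post):
-- Add a soft-delete flag to both tables.
ALTER TABLE TableA ADD IsDeleted bit NOT NULL DEFAULT 0;
ALTER TABLE TableB ADD IsDeleted bit NOT NULL DEFAULT 0;
-- Instead of deleting, mark the parent and its children; the flag
-- replicates as an ordinary UPDATE, so no FK conflict can arise.
DECLARE @id int;
SET @id = 42;  -- hypothetical parent key
UPDATE TableB SET IsDeleted = 1 WHERE TableAId = @id;
UPDATE TableA SET IsDeleted = 1 WHERE Id = @id;
-- At a quiet time, after all subscribers have merged, purge for real:
DELETE FROM TableB WHERE IsDeleted = 1;
DELETE FROM TableA WHERE IsDeleted = 1;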
HTH
Dick
Handling errors in transactional replication
I have this problem: I have transactional replication
set up with 2 tables as articles for a publisher. Consider a
transaction which inserts one row into each of these
tables in the publisher database. At the subscriber database,
one of the commands succeeds and the other fails (let's say
because of a duplicate value for a key field).
What I would like is ...
1. Enable the error to be logged to an application table.
2. Have both the commands fail (not just one).
3. Have the replication continue without erroring out.
4. Have the erroneous transaction be retried at a later time.
Could you help me resolve this sticky problem?
Answers on how to achieve any of the points are welcome
too.
Thanks,
Ramuk
If you have a transaction on the publisher, you may check @@ERROR and then
call ROLLBACK, but whether you roll back or not, the sp is still executed on
the subscriber. This situation is altered (no subscriber call) if you set
the transaction isolation level to serializable. This is important to do
because even if you trap the same error in the transaction on the subscriber
and roll back there, the error is registered and the distribution agent will
fail. -SkipErrors would avoid this problem, but ideally the call shouldn't be
sent from the publisher to the subscriber if it has already failed once.
Logging of this error could be done from within the transaction using
xp_logevent or simply by inserting into a SQL table.
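For illustration, the trap-log-rollback pattern might look something like this (table and column names here are hypothetical placeholders):
DECLARE @err int;
BEGIN TRAN;
INSERT INTO dbo.TableOne (KeyCol) VALUES (1);  -- one of the replicated commands
SET @err = @@ERROR;
IF @err <> 0
BEGIN
    -- Log to an application table and to the event log, then undo everything.
    INSERT INTO dbo.AppErrorLog (LoggedAt, ErrorNumber, Msg)
    VALUES (GETDATE(), @err, 'Insert into TableOne failed');
    EXEC master.dbo.xp_logevent 60000, 'Replicated transaction failed', informational;
    ROLLBACK TRAN;
    RETURN;
END
COMMIT TRAN;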
HTH,
Paul
|||
Thanks for the response ... though the problem I face is that the transaction succeeds on the publisher. It's only on the subscriber that it fails.
Any ideas?
|||
Ramuk,
in theory this shouldn't be possible! If all the articles are replicated,
then the same transaction should be applied.
What is the error on the subscriber?
Regards,
Paul
|||
Paul,
We are in the design stage ... so the problem is as yet theoretical.
The problem is that when two commands which are part of the same transaction are applied at the subscriber, it is possible for one of the commands to fail (because of business logic at the subscriber's end).
If I were to go the route of using "Continue on Data Consistency Errors", then the failed command will not be applied on the subscriber but the other command will ... which causes an invalid state. If I do not set "Continue on Data Consistency Errors" ... then the replication would stop ... that is the crux of the problem.
So an ideal solution would be for both commands not to be applied in case of an error on either one ... the error being logged into an application table (for purposes of monitoring) ... and if possible the transaction marked for retry later.
Also another question ... can I get the value of the "Transaction sequence number" at the distributor? Can it be passed to the custom SPs at the subscriber?
Thanks for the help,
Ramuk.
|||
Ramuk,
my recollection is that if the 2 data modifications are wrapped in a
transaction which has error-trapping (or SET XACT_ABORT ON) then the
transaction will be completely rolled back on an error at the
subscriber. I haven't tried this out in all circumstances and AFAIR you'll
still need -SkipErrors. Am off home now, but will retest it some time
tomorrow.
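For illustration, that pattern would be something like the following (table names are hypothetical placeholders):
-- With XACT_ABORT on, any run-time error aborts the batch and rolls
-- back the entire open transaction, so neither insert survives alone.
SET XACT_ABORT ON;
BEGIN TRAN;
INSERT INTO dbo.TableOne (KeyCol) VALUES (1);
INSERT INTO dbo.TableTwo (KeyCol) VALUES (1);  -- a failure here undoes the insert above
COMMIT TRAN;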
Regards,
Paul
Friday, February 24, 2012
Handling different timezones
I'm planning on using replication (merge most likely) in an environment where
the subscribers will be in different timezones. Is there a way to convert the
datetime fields from the subscriber's local database timezone to the
publisher's, and vice versa?
Thanks
+---+
William F. Kinsley
Sr. Design Engineer
NextGen Healthcare Info. Sys. Inc
Normally this is done at the application level.
There is no way that I am aware of to do this, other than storing the time as
UTC or with time zone information.
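A minimal sketch of the UTC approach (table and column names are hypothetical):
-- Store every timestamp in UTC so all sites agree on the stored value.
CREATE TABLE dbo.Events
(
    EventId       int IDENTITY(1,1) PRIMARY KEY,
    OccurredAtUtc datetime NOT NULL DEFAULT GETUTCDATE()
);
-- Each site converts to local time only for display, e.g. with a
-- per-site offset (+2 hours here is purely an illustrative value):
SELECT EventId, DATEADD(hour, 2, OccurredAtUtc) AS OccurredAtLocal
FROM dbo.Events;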
"William F. Kinsley" <bacardi@.online.nospam> wrote in message
news:%23s5YuzHuEHA.1280@.TK2MSFTNGP10.phx.gbl...
>I planning on using replication(merge most likely) in an environment where
> the subscribers will be in different timezones. Is there way to convert
> the
> datetime fields from the subscriber's to the local database timezone to
> the
> publishers, and vice versa?
> Thanks
> --
> +---+
> William F. Kinsley
> Sr. Design Engineer
> NextGen Healthcare Info. Sys. Inc
>
|||
Hi William,
Thanks for your post.
From your description, I understand you would like to know whether there
will be any problems when replicating between different time zones. Have I
understood you? Correct me if I am wrong.
Based on my knowledge, there won't be. We know of many cases where merge is
successfully running across time zones (merge wouldn't be too valuable if
it couldn't). If you have a look at the usage of getdate in merge's stored
procedures, you will find that the merge history tables' use of a datetime
(created by getdate) should not be a concern.
Additionally, here is a BOL excerpt for your reference:
============
The generation column in these tables acts as a logical clock indicating
when a row was last updated at a given site. Actual datetime values are not
used for marking when changes occur, or deciding conflicts, and there is no
dependence on synchronized clocks between sites. This makes the conflict
detection and resolution algorithms more resilient to time zone differences
and differences between physical clocks on multiple servers. At a given
site, the generation numbers correspond to the order in which changes were
performed by the Merge Agent or by a user at that site.
============
Thank you for your patience and cooperation. If you have any questions or
concerns, don't hesitate to let me know. We are here to be of assistance!
Sincerely yours,
Michael Cheng
Online Partner Support Specialist
Partner Support Group
Microsoft Global Technical Support Center
Introduction to Yukon! - http://www.microsoft.com/sql/yukon
This posting is provided "as is" with no warranties and confers no rights.
Please reply to newsgroups only, many thanks!
|||
Thanks for such a detailed reply; this was really not my question, but it still
was very helpful. My question is: if I have a user table with a datetime
column, is there a way for the replication process to automatically convert
the datetime from the source database's timezone to the target database's
time zone? I would prefer not to store everything as UTC since I will have
user-created reporting on the data.
Thanks
Bill
""Mingqing Cheng [MSFT]"" <v-mingqc@.online.microsoft.com> wrote in message
news:VIGUFEMuEHA.464@.cpmsftngxa10.phx.gbl...
> Hi William,
> Thanks for your post.
> From your descriptions, I understood you would like to know whether there
> will be any problems when replicating between different time zones. Have I
> understood you? Correct me if I was wrong.
> Based on my scope, it won't. We know of many cases where merge is
> successfully running across time zones (merge wouldn't be too valuable if
> it couldn't ). If you have a look at the usage of getdate in merge's
stored
> procuedres(identifying those stored procedures would have been useful ),
> you will find that merge history tables' use of a datetime (that was
> created by getdate) should not be a concern.
> Additionally, there is some BOL excerpt for your reference
> ============
> The generation column in these tables acts as a logical clock indicating
> when a row was last updated at a given site. Actual datetime values are
not
> used for marking when changes occur, or deciding conflicts, and there is
no
> dependence on synchronized clocks between sites. This makes the conflict
> detection and resolution algorithms more resilient to time zone
differences
> and differences between physical clocks on multiple servers. At a given
> site, the generation numbers correspond to the order in which changes were
> performed by the Merge Agent or by a user at that site.
> ============
> Thank you for your patience and corperation. If you have any questions or
> concerns, don't hesitate to let me know. We are here to be of assistance!
>
> Sincerely yours,
> Michael Cheng
> Online Partner Support Specialist
> Partner Support Group
> Microsoft Global Technical Support Center
> Introduction to Yukon! - http://www.microsoft.com/sql/yukon
> This posting is provided "as is" with no warranties and confers no rights.
> Please reply to newsgroups only, many thanks!
>
|||
Hi Bill,
I am sorry for my poor understanding.
If so, I think, as Hilary Cotter said, there is no such functionality
provided in SQL Server 2000 and you will have to do it at the application
level.
However, it is a great idea for such an expanded feature and I'd recommend
that you forward the recommendation to the Microsoft Wish Program:
mswish@.microsoft.com.
Hope this will be considered in a future version of SQL Server.
Thank you for your patience and cooperation. If you have any questions or
concerns, don't hesitate to let me know. We are here to be of assistance!
Sincerely yours,
Michael Cheng
|||
I was afraid that was going to be the answer. Thank you for your help.
Bill
""Michael Cheng [MSFT]"" <v-mingqc@.online.microsoft.com> wrote in message
news:qBXAuAmuEHA.2692@.cpmsftngxa10.phx.gbl...
> Hi Bill,
> I am sorry for my poor understanding.
> If so, I think, as Hilary Cotter had said, there is no such funcationality
> provided in SQL Server 2000 and you will have to do it in application
> level.
> However, it is a great idea to have such expand feature and I'd recommend
> that you forward the recommendation to the Microsoft Wish Program:
> mswish@.microsoft.com.
> Hope the this will be considered in the furture version of SQL Server.
> Thank you for your patience and corperation. If you have any questions or
> concerns, don't hesitate to let me know. We are here to be of assistance!
>
> Sincerely yours,
> Michael Cheng
> Online Partner Support Specialist
> Partner Support Group
> Microsoft Global Technical Support Center
> Introduction to Yukon! - http://www.microsoft.com/sql/yukon
> This posting is provided "as is" with no warranties and confers no rights.
> Please reply to newsgroups only, many thanks!
>
|||
Hi Bill,
Welcome!
Sincerely yours,
Michael Cheng
Handling conflicts with a Custom COM resolver
I'm writing a custom COM conflict resolver to handle replication conflicts.
I'm having problems with handling a conflict caused by the violation of a foreign key constraint.
Situation:
Publisher deletes item X
Concurrently the subscriber creates item Y with a foreign key to item X.
My goal for the replication is that the deletion of item X on the publisher is rolled back and that item Y is replicated to the publisher. So after replication, the publisher and subscriber both contain item X and item Y.
The replication procedure is:
In the Upload phase, a conflict (REPOLEChange_UploadInsertFailed) occurs, because item Y cannot be created on the publisher (caused by the foreign key violation). SQL Server adds item Y automatically to the MSMerge_tombstone table of the publisher.
In the Download phase, the change event 'REPOLEChange_PublisherSystemDelete' happens, because the system tries to delete item Y at the subscriber.
My questions are:
How can I prevent item Y from being deleted on the subscriber?
How can I achieve that item Y is created at the publisher?
The problem is that item Y is put in the MSMerge_tombstone table of the publisher. I don't think it is nice to remove it manually from this system table.
Does anyone have ideas?
thanks in advance, Marco
What you are describing is a feature called "compensation". That is, if
we try to apply a row and an error occurs, replication will compensate
for that error and delete the row in order to get both sides in sync
with each other.
This is not always the desired effect. There is a fix which will allow
you to control whether compensation occurs.
See:
http://support.microsoft.com/?kbid=828637
Please don't delete manually out of the tombstone or contents tables.
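For reference, once the fix is applied, compensation can be controlled per article when it is added; a minimal sketch, assuming the @compensate_for_errors parameter described in that KB article (publication and article names are placeholders):
EXEC sp_addmergearticle
    @publication = N'MyPublication',
    @article = N'tableB',
    @source_object = N'tableB',
    @compensate_for_errors = 'false';  -- don't auto-delete rows that fail to apply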
Hope this helps,
Reinout Hillmann
SQL Server Product Unit
This posting is provided "AS IS" with no warranties, and confers no rights.
|||
Thanks!
I've tried the hotfix and it gives the desired result.
Marco
Handling autogenerated PK/FK conflicts in merge replication
We use autogenerated primary keys in most of our tables. Some of these keys are also foreign keys in other tables. Right now there is only 1 database server at a central location. But now there is a need to have multiple database servers at different locations. Data from these remote sites needs to be replicated to the central server. Some data would also be distributed from the central server to selected remote sites.
If I could redesign, I would have chosen something like GUIDs for the primary keys, or a combination of something like ServerName and an autogenerated number as a combined key. But that's not possible right now. How do I handle merge replication conflicts in this case?
I am looking for some pointers as to how to handle this case. If it were just a simple table with 1 primary key, that would be easy, as I could throw away the primary key on the remote server and let the central server create a new key when the data is inserted. But in my case, a single table can be related to 5 or more other tables through these autogenerated keys. Any help is much appreciated.
GUIDs would be easiest; however, you can also make a composite key with the ID and the hostname. That way the key will always be unique.
I would not recommend a conflict resolver for the primary key; you will end up with a mess eventually, I think.
Martin
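A minimal sketch of the composite-key idea (table and column names are hypothetical; @@SERVERNAME is used so the value identifies the originating server):
CREATE TABLE dbo.Orders
(
    OrderId      int IDENTITY(1,1) NOT NULL,
    OriginServer sysname NOT NULL DEFAULT @@SERVERNAME,  -- which server created the row
    CONSTRAINT PK_Orders PRIMARY KEY (OrderId, OriginServer)
);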
|||
You can also use identity ranges for subscribers so they don't conflict when they upload back to the publisher server.
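A minimal sketch of identity range management (names are placeholders; the parameter names below are from later versions of sp_addmergearticle - SQL Server 2000 used @auto_identity_range = 'true' instead of @identityrangemanagementoption):
EXEC sp_addmergearticle
    @publication = N'MyPublication',
    @article = N'Orders',
    @source_object = N'Orders',
    @identityrangemanagementoption = 'auto',
    @pub_identity_range = 10000,  -- identity range reserved for the publisher
    @identity_range = 1000;       -- identity range handed to each subscriber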
Sunday, February 19, 2012
HA clustering with replication
Node A and Node B (SQL Server 2000) are being replicated to each other,
serving data partitioning.
What are the options to set up HA clustering, and which one is the most
economical?
Appreciate any help.
I think you want a cluster. Here are a few guides for you:
General Clustering information
http://www.microsoft.com/ntserver/su...tering_faq.asp
http://www.microsoft.com/downloads/d...displaylang=en
http://www.microsoft.com/windows2000...ustersteps.asp
SQL Clustering
http://www.microsoft.com/technet/pro.../failclus.mspx
http://www.microsoft.com/usa/webcasts/ondemand/1883.asp (Webcast)
Now for the price issue. HA and price never agree. You can either have one
or the other.
My website also has a ton of clustering information on it.
Cheers,
Rod
MVP - Windows Server - Clustering
http://www.nw-america.com - Clustering
"Wen Chang" <WenChang@.discussions.microsoft.com> wrote in message
news:235AAE4A-F675-4700-8795-8E9DD7CA5DC4@.microsoft.com...
> Node A and Node B (sql server 2000) are being replicated to each other
> serving data partitioning.
> What are the options to set up HA clustering and which one is the most
> economic one?
> Appreciate any help.