Showing posts with label writing. Show all posts

Friday, March 30, 2012

Having Stored Procedure Problem - SQLExpress 2005

Hi.

I am writing a stored procedure for executing search
and returning results. The Sproc is a bit complicated and has
gotten away from me, now I can't understand why I am getting
bad results.

It's long and detailed, so before I dump code and get yelled at,
I am wondering if it is OK to post a large SPROC here for general
debugging help?

Thanks.

Absolutely that's ok.

It would also be helpful if you posted any DDL, sample data, and desired results as well.

|||

Yes, but ...

(And thanks for asking in advance!)

Before you do that, let me offer the following suggestions.

Often, it is best to 'build' a complicated procedure the way you build a structure, 'brick by brick'. Start out with a small part, get that working. Then add another small part, get that working.

When you are baffled, put in PRINT statements, printing the parameter and variable values, and even location in the code (for IF/WHILE switches) for verification.
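For instance, a skeleton along these lines (procedure, table, and parameter names all invented for illustration) shows the kind of PRINT instrumentation being described:

```sql
-- Hypothetical sketch: trace parameter values and branch choices with PRINT.
create procedure dbo.SearchOrders
    @CustomerId int,
    @Status varchar(20)
as
begin
    -- Echo the inputs so you can verify what the proc actually received.
    print 'SearchOrders: @CustomerId = '
        + isnull(cast(@CustomerId as varchar(12)), '<null>')
        + ', @Status = ' + isnull(@Status, '<null>')

    if @Status is not null
    begin
        print 'Taking filtered branch'   -- confirms which IF branch ran
        select * from dbo.Orders
        where CustomerId = @CustomerId and Status = @Status
    end
    else
    begin
        print 'Taking unfiltered branch'
        select * from dbo.Orders
        where CustomerId = @CustomerId
    end
end
go
```

The PRINT output shows up in the Messages tab in Management Studio, so you can see exactly which path the procedure took with which values.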

And when you are ready to drop the whole thing here, please post the DDL for all associated tables, fabricate some sample data for those tables (in the form of INSERT statements), and a clear explanation of and example of desired output. See this link for help in putting it all together. The less 'set up' work we have to do, the more likely you are going to have folks tackle your problem and help you.
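As a sketch, a self-contained repro post of that kind might look like this (table, columns, and data all invented for illustration):

```sql
-- Hypothetical minimal repro: table DDL plus sample rows as INSERT
-- statements, so volunteers can recreate the problem in one paste.
create table dbo.Orders
(
    OrderId    int primary key,
    CustomerId int not null,
    Status     varchar(20) not null
)
go
insert into dbo.Orders (OrderId, CustomerId, Status) values (1, 100, 'Open')
insert into dbo.Orders (OrderId, CustomerId, Status) values (2, 100, 'Closed')
insert into dbo.Orders (OrderId, CustomerId, Status) values (3, 200, 'Open')
go
-- Desired output for CustomerId = 100: only the 'Open' order, OrderId 1.
```

With that in hand, anyone reading the thread can run the failing procedure against real rows immediately.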

If you go to the effort of preparing your 'presentation', the volunteers here can more easily get involved with your problem. If they first have to take the time to create a simulated environment just to try to help you, you will radically reduce the number of folks willing to make the time investment.

All that said, it is a great group of folks here, willing to help those willing to be helped.

|||

Concur 100% with Dale. Much better to give too much information. What will get you "yelled" at is asking a question like:

My stored proc won't work. Why?

And yes, that is an all-too-real example. :)

Friday, March 23, 2012

Have a problem with UpdateCommand

Hello everyone,

I am a bit new to ASP.NET, so forgive me if I don't understand something right off.

I am writing a page that gets data from the SQL database and puts it into a GridView. To get the information I am using SqlDataSource. I can't post an error message because I am running this off a remote server, but through testing I figured out that it crashes when I add

Analysis = @.Analysis

line into the UpdateCommand. First I thought that maybe my parameters were not read correctly, but when I tried something like

Analysis = 6

it worked. Then I tried to replace CQNo=@.CQNo with

CQNo = @.Analysis

and it also worked.

I am very much puzzled by this. In the SQL database, CQNo is varchar, WorkDate is DateTime, and Analysis is Money.

Can someone please help, I am out of ideas. Thanks!

<asp:SqlDataSource ID="myEfforts" runat="server"
    SelectCommand="SELECT SNo, CQNo, WorkDate, Analysis, Design, Coding, Testing, DesRev, CodeRev, PeerRev, SysTest, PostInstall, Others, TotEff FROM Effort WHERE EmpId = @EmpId ORDER BY WorkDate DESC"
    DeleteCommand="DELETE FROM Effort WHERE SNo = @SNo"
    UpdateCommand="UPDATE Effort SET CQNo = @CQNo, WorkDate = @WorkDate Analysis = @Analysis WHERE SNo = @SNo">
    <SelectParameters>
        <asp:SessionParameter DefaultValue="" Name="EmpId" SessionField="PR_EmpIDVal" Type="String" />
    </SelectParameters>
</asp:SqlDataSource>

Looks like you are missing a comma after @WorkDate and before Analysis? Or was that a typo while pasting the code here?|||That was a typo when I was pasting the code in here. Sorry|||

I tried this and it works.

<asp:GridView ID="GridView1" runat="server" AutoGenerateColumns="False" DataKeyNames="SNo" DataSourceID="SqlDataSource1">
<Columns>
<asp:CommandField ShowEditButton="True" />
<asp:BoundField DataField="SNo" HeaderText="SNo" ReadOnly="True" SortExpression="SNo" />
<asp:BoundField DataField="CQNo" HeaderText="CQNo" SortExpression="CQNo" />
<asp:BoundField DataField="WorkDate" HeaderText="WorkDate" SortExpression="WorkDate" />
<asp:BoundField DataField="Analysis" HeaderText="Analysis" SortExpression="Analysis" />
</Columns>
</asp:GridView>
<asp:SqlDataSource ID="SqlDataSource1" runat="server" ConnectionString="<%$ ConnectionStrings:MSDN_forumConnectionString %>"
DeleteCommand="DELETE FROM [effort] WHERE [SNo] = @.SNo"
SelectCommand="SELECT [SNo], [CQNo], [WorkDate], [Analysis] FROM [effort]"
UpdateCommand="UPDATE [effort] SET [CQNo] = @.CQNo, [WorkDate] = @.WorkDate, [Analysis] = @.Analysis WHERE [SNo] = @.SNo">
<DeleteParameters>
<asp:Parameter Name="SNo" Type="Int32" />
</DeleteParameters>
<UpdateParameters>
<asp:Parameter Name="CQNo" Type="String" />
<asp:Parameter Name="WorkDate" Type="DateTime" />
<asp:Parameter Name="Analysis" Type="Decimal" />
<asp:Parameter Name="SNo" Type="Int32" />
</UpdateParameters>

</asp:SqlDataSource>

|||Thanks a lot for taking the time, let me try this.|||

It worked! Thanks a lot!!!! Can you please explain what was causing the error (what I was doing wrong)?

Thanks a lot!

|||

I didn't see your whole page, so my guess is the problem lies in the UpdateParameters section. A quick trick for this: you can always drag and drop a GridView, hook it up to the table of interest in your database, and ask for all commands (insert, update, delete) to be generated. You will get a working copy to modify. Remember to assign a primary key for the table.

Glad you got it to work now.

Wednesday, March 7, 2012

Hard Stored Procedure?

I'm writing a stored procedure that will look for 'string' in the 'n'th
column. For instance,
exec QueryTable 1,'aString'
...should look for the string 'aString' in the first column of a specific
table. Does anyone know how to do this in a generic way, so that I don't have
a big IF statement where I write the query once for each column?|||Try this, salt to taste (and add better exception handling):
create procedure ap_get_rows
@table sysname,
@field int,
@value varchar(4000)
as
declare @fieldname sysname
declare @sql varchar(4000)
select @fieldname = [name]
from syscolumns
where id = object_id(@table) and colid = @field
set @sql = 'select * from [' + @table + '] where [' + @fieldname + '] = ''' + @value + ''''
print @sql
exec (@sql)
go
Mike|||courtesy of Steve Kass:
http://www.users.drew.edu/skass/sql...lTables.sql.txt
-oj
"Ken" <Ken@.discussions.microsoft.com> wrote in message
news:EAF72EFC-B55A-4F56-B6AF-DF48BB05177A@.microsoft.com...
> I'm writing a stored procedure that will look for 'string' in the 'n'th
> column. For instance,
> exec QueryTable 1,'aString'
> ...should look for the string 'aString' in the first column of a specific
> table. Does anyone know how to do this in a generic way, so that I don't
> have
> a big IF statement where I write the query once for each column?|||I forgot the disclaimer: "This stored procedure uses dynamic SQL which is
known to perform less efficiently than compiled SQL statements. Use with
caution especially in performance sensitive operations."
"Mike Jansen" <mjansen_nntp@.mail.com> wrote in message
news:%23LXQyFeRFHA.4028@.tk2msftngp13.phx.gbl...
> Try this, salt to taste (and add better exception handling):
>
> create procedure ap_get_rows
> @.table sysname,
> @.field int,
> @.value varchar(4000)
> as
> declare @.fieldname sysname
> declare @.sql varchar(4000)
> select @.fieldname = [name]
> from syscolumns
> where id = object_id(@.table) and colid = @.field
> set @.sql = 'select * from [' + @.table + '] where [' + @.fieldname + '] =
'''
> + @.value + ''''
> print @.sql
> exec (@.sql)
> go
>
> Mike
>|||You have to use dynamic sql.
Example:
use northwind
go
create procedure dbo.proc1
@ts sysname = N'dbo',
@tn sysname,
@ordinal_position int,
@value varchar(50)
as
set nocount on
declare @sql nvarchar(4000)
declare @cn sysname
select
@cn = column_name
from
information_schema.columns
where
table_schema = @ts
and table_name = @tn
and ordinal_position = @ordinal_position
if @cn is not null
begin
set @sql = N'select ' + quotename(@cn) + N' from ' + quotename(@ts) + N'.' +
quotename(@tn) + N' where ' + quotename(@cn) + N' like ''%' +
replace(@value, '''', '''''') + N'%'''
print @sql
exec sp_executesql @sql
end
go
exec dbo.proc1 @tn = N'customers', @ordinal_position = 2, @value = 'La maison d''Asie'
go
exec dbo.proc1 @tn = N'employees', @ordinal_position = 4, @value = 'Representative'
go
drop procedure proc1
go
The Curse and Blessings of Dynamic SQL
http://www.sommarskog.se/dynamic_sql.html
AMB
"Ken" wrote:

> I'm writing a stored procedure that will look for 'string' in the 'n'th
> column. For instance,
> exec QueryTable 1,'aString'
> ...should look for the string 'aString' in the first column of a specific
> table. Does anyone know how to do this in a generic way, so that I don't have
> a big IF statement where I write the query once for each column?

Monday, February 27, 2012

Handling very large XML result sets

I am writing a .NET based application to create large XML data files using
SQLXML classes and FOR XML EXPLICIT queries. What are some strategies I can
use to break up and process these large result sets? The overhead of issuing
multiple queries by breaking them up via WHERE clause filters isn't the way I
want to go since my queries are very large and take significant time to
process within SQL server.
I am currently experiencing out of memory exceptions on some larger result
sets (~50-60 Mbytes total XML file size). My first attempt was using
SqlXmlCommand.ExecuteXmlReader and an XmlDocument via this snippet of code:
XmlReader xr = forXMLCommand.ExecuteXmlReader();
XmlDocument xd = new XmlDocument();
xd.Load(xr);
This throws a System.OutOfMemoryException on the call to ExecuteXmlReader
when the result set gets very large.
I also tried using SqlXmlCommand.ExecuteStream thinking I could read a
buffer of chars at a time to process these large result sets but this also
resulted in a System.OutOfMemoryException on the call to ExecuteStream:
Stream s = forXMLCommand.ExecuteStream();
StreamWriter sw3 = new StreamWriter(mResultsFileName);
using (StreamReader sr = new StreamReader(s))
{
char[] c = null;
while (sr.Peek() >= 0)
{
c = new char[10000];
int numRead = sr.Read(c, 0, c.Length);
sw3.Write(c, 0, numRead);
}
}
I have tried running my application on two different systems one with 1G of
main memory and the other a Win2K3 server with 8G of main memory. Both
systems seem to run out of memory at the same 50-60 MB limit. Are there any
.NET memory settings I can tweak to give my .NET application more memory?
Thanks for your suggestions and ideas,
Scott
The XmlReader is a streaming interface which should not run out of memory
via the SqlXmlCommand.ExecuteStream method.
Loading into an XmlDocument however will cache the entire document into
memory.
Can you remove the following two lines from your repro and see if you are
still having the problem:
XmlDocument xd = new XmlDocument();
xd.Load(xr);
Thanks -
Andrew Conrad
Microsoft Corp
http://blogs.msdn.com/aconrad
|||Andrew,
That was exactly my thought as well, but ExecuteStream is throwing an
OutOfMemoryException. I am NOT calling XmlDocument.Load in the code that
uses ExecuteStream.
Here is my full method I am using:
private void ExecuteSQLXMLCommandExecuteStream()
{
try
{
SqlXmlCommand forXMLCommand = new SqlXmlCommand("Provider=SQLOLEDB;DATA
SOURCE=Gibraltar;Initial Catalog=RDCModel;User ID=sa;Password=XXXX");
forXMLCommand.CommandType = SqlXmlCommandType.Sql;
StreamReader sr1 = new StreamReader(mQueryFileName);
string query = sr1.ReadToEnd();
sr1.Close();
query = query.Replace("\r\n", " ");
query = query.Replace("\t", " ");
forXMLCommand.CommandText = query;
Stream s = forXMLCommand.ExecuteStream();
StreamWriter sw3 = new StreamWriter(mResultsFileName);
using (StreamReader sr = new StreamReader(s))
{
char[] c = null;
while (sr.Peek() >= 0)
{
c = new char[10000];
int numRead = sr.Read(c, 0, c.Length);
sw3.Write(c, 0, numRead);
}
}
sw3.Close();
}
catch (SqlXmlException ex)
{
ex.ErrorStream.Position = 0;
string sqlErrorString;
sqlErrorString = new StreamReader(ex.ErrorStream).ReadToEnd();
Console.WriteLine(sqlErrorString);
RDCUtilities.WriteToLog(sqlErrorString);
}
catch (Exception ex)
{
Console.WriteLine(ex.Message);
Console.WriteLine(ex.StackTrace);
RDCUtilities.WriteToLog(ex.Message);
}
"Andrew Conrad" wrote:

> The XmlReader is a streaming interface which should not run out of memory
> via the SqlXmlCommand.ExecuteStream method.
> Loading into an XmlDocument however will cache the entire document into
> memory.
> Can you remove the following two lines from your repro and see if you are
> still having the problem:
> XmlDocument xd = new XmlDocument();
> xd.Load(xr);
> Thanks -
> Andrew Conrad
> Microsoft Corp
> http://blogs.msdn.com/aconrad
>
|||Try using SqlXmlCommand.ExecuteToStream() instead of ExecuteStream.
Because of some technical limitations with COM interop, ExecuteStream
caches results.
Andrew Conrad
Microsoft Corp
http://blogs.msdn.com/aconrad


handling failover at the application level with Sql2005 Mirroring

We're in the process of writing an application that we want to work with a
synchronous mirrored Sql 2005 database and a witness.
As I currently understand it the best way to handle failover with this setup
is to catch the server not available error and then retry the operation.
Any open transaction will be rolled back (in effect) so we need to retry the
entire transaction.
We have complicated transactional processes at our business logic layer, so
we have quite a few places where we begin a transaction, carry out a number
of operations that may involve multiple database calls, and then commit or
rollback on error.
In order to add this retry functionality it looks like I need to add a
try-catch around every call at the business logic layer as it needs to wrap
the transaction. This is possible, but messy.
Is there a better way of doing this? Ideally the handling of the retry
should be at the data access layer, but as the transaction in progress when
the failover happens will be lost it doesn't look possible to handle this in
individual data commands.
Any ideas?
Keith Henry
This isn't unique to Database Mirroring. You would have to do that in every
case where you are dealing with a failover configuration. There is no logic
that basically says "retry". You have to code this all yourself.
When the mirror fails over, you will get a disconnect and any transactions
in flight will be rolled back. The one thing your applications can take
advantage of is the new MDAC library. In this situation,
there is code that will cache both the principal and mirror. Your
application can simply reconnect to the principal and the MDAC layer will
transparently redirect the connection and requests to the mirror.
Mike
http://www.solidqualitylearning.com
Disclaimer: This communication is an original work and represents my sole
views on the subject. It does not represent the views of any other person
or entity either by inference or direct reference.
"Keith Henry" <k.henry@.link-hrsystems.com> wrote in message
news:u2mk5jjLGHA.984@.tk2msftngp13.phx.gbl...
> We're in the process of writing an application that we want to work with a
> synchronous mirrored Sql 2005 database and a witness.
> As I currently understand it the best way to handle failover with this
> setup is to catch the server not available error and then retry the
> operation. Any open transaction will be rolled back (in effect) so we need
> to retry the entire transaction.
> We have complicated transactional processes at our business logic layer,
> so we have quite a few places where we begin a transaction, carry out a
> number of operations that may involve multiple database calls, and then
> commit or rollback on error.
>
> In order to add this retry functionality it looks like I need to add a
> try-catch around every call at the business logic layer as it needs to
> wrap the transaction. This is possible, but messy.
>
> Is there a better way of doing this? Ideally the handing of the retry
> should be at the data access layer, but as the transaction in progress
> when the failover happens will be lost it doesn't look possible to handle
> this in individual data commands.
> Any ideas?
> Keith Henry
>
|||Thanks,
I guess we'll have to go with the retry from the top level then.
Keith Henry
"Michael Hotek" <mike@.solidqualitylearning.com> wrote in message
news:udYhiKnLGHA.208@.tk2msftngp13.phx.gbl...
> This isn't unique to Database Mirroring. You would have to do that in
> every case where you are dealing with a failover configuration. There is
> no logic that basically says "retry". You have to code this all yourself.
> When the mirror fails over, you will get a disconnect and any transactions
> in flight will be rolled back. The only thing your applications can take
> advantage of if they are using the new MDAC library. In this situation,
> there is code carried that will cache both the principal and mirror. Your
> application can simply reconnect to the principal and the MDAC layer will
> transparently redirect the connection and requests to the mirror.
> --
> Mike
> http://www.solidqualitylearning.com
> Disclaimer: This communication is an original work and represents my sole
> views on the subject. It does not represent the views of any other person
> or entity either by inference or direct reference.
>
> "Keith Henry" <k.henry@.link-hrsystems.com> wrote in message
> news:u2mk5jjLGHA.984@.tk2msftngp13.phx.gbl...
>

Friday, February 24, 2012

Handling conflicts with a Custom COM resolver

Hello,
I'm writing a custom COM conflict resolver to handle replication conflicts.
I'm having problems with handling a conflict caused by the violation of a foreign key constraint.
Situation:
Publisher deletes item X
Concurrently the subscriber creates item Y with a foreign key to item X.
My goal of the replication is that the deletion of item X on the publisher is rolled back and that item Y is replicated to the publisher. So after replication, the publisher and subscriber both contain item X and item Y.
The replication procedure is:
In Upload phase, a conflict (REPOLEChange_UploadInsertFailed) occurs, because item Y cannot be created on the publisher (caused by foreign key violation). SQLServer adds item Y automatically to the MSMerge_tombstone table of the publisher.
In Download phase, the change event 'REPOLEChange_PublisherSystemDelete' happens, because the system tries to delete item Y at the subscriber.
My questions are:
How can I prevent item Y from being deleted on the subscriber?
How can I achieve that item Y is created at the publisher?
The problem is that item Y is put in the MSMerge_tombstone table of the publisher. I don't think it is nice to remove it manually from this system table.
Does someone have ideas?
thanks in advance, Marco
What you are describing is a feature called "compensation". That is, if
we try and apply a row and an error occurs replication will compensate
for that error and delete the row in order to get both sides in sync
with each other.
This is not always the desired effect. There is a fix which will allow
you to control whether compensation occurs.
See:
http://support.microsoft.com/?kbid=828637
Please don't delete manually out of tombstone or contents
Hope this helps,
Reinout Hillmann
SQL Server Product Unit
This posting is provided "AS IS" with no warranties, and confers no rights.
Marco wrote:
> Hello,
> I'm writing a custom COM conflict resolver to handle replication conflicts.
> I'm having problems with handling a conflict caused by the violation of a foreign key constraint.
> Situation:
> Publisher deletes item X
> Concurrently the subsriber creates item Y with a foreign key to item X
> My goal of the replication is, that the deletion of item X on the publisher is rollbacked and that item Y is replicated to the publisher. So after replication, the publisher and subscriber contain both item X and item Y.
> The replication procedure is:
> In Upload phase, a conflict (REPOLEChange_UploadInsertFailed) occurs, because item Y cannot be created on the publisher (caused by foreign key violation). SQLServer adds item Y automatically to the MSMerge_tombstone table of the publisher.
> In Download phase, the change event 'REPOLEChange_PublisherSystemDelete' happens, because the system tries to delete item Y at the subscriber.
> My questions are:
> How can I prevent item Y from being deleted on the subscriber?
> How can I achieve that item Y is created at the publisher?
> The problem is that item Y is put in the MSMerge_tombstone table of the publisher. I don't think it is nice to remove it manually from this system table.
> Does someone has ideas ?
> thanks in advance, Marco
>
|||thanks!
I've tried the hotfix and it gives the desired result.
Marco

handling columns with multiple values

I am writing a stored procedure that needs to access individual entries in a column with multiple entries delimited by a comma (yeah, I know, not first normal form). Like this:

Key    NotANormalizedCol
1      1324, 5124, 5435, 5467
2      423, 23, 5345
3      52334, 53443, 1224
4      12, 4, 1243, 66

Is there a function that returns a substring given a delimiter character? The only substring-returning functions that I found are LEFT and RIGHT, which return fixed-length substrings.

I am pretty new to this, so I apologize if this is a trivial question.

Look at http://www.sommarskog.se/arrays-in-sql.html|||Hi,

I once wrote a function for that which can be found here in this post: http://forums.microsoft.com/MSDN/ShowPost.aspx?PostID=320221&SiteID=1

HTH, Jens Suessmeyer.

http://www.sqlserver2005.de
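For reference, one common approach from articles like those linked above is a numbers-table split. This is only a sketch with invented names: dbo.Numbers is assumed to be a helper table containing the integers 1, 2, 3, ... up to at least the longest list length.

```sql
-- Illustrative sketch (SQL Server 2005 syntax): split a comma-delimited
-- string into rows by finding each delimiter position via a numbers table.
create function dbo.SplitList (@list varchar(8000))
returns table
as
return
(
    select
        -- Take the text from each position that follows a comma, up to
        -- the next comma (a trailing comma is appended to close the last item).
        ltrim(substring(@list, n.Number,
              charindex(',', @list + ',', n.Number) - n.Number)) as Item
    from dbo.Numbers as n
    where n.Number <= len(@list)
      -- A position starts an item when the preceding character is a comma
      -- (the string is prefixed with one so position 1 qualifies too).
      and substring(',' + @list, n.Number, 1) = ','
)
go
```

With the sample data above, selecting from the function for the first row would look like `select Item from dbo.SplitList('1324, 5124, 5435,5467')`, giving one trimmed value per row, which you can then join or filter in the stored procedure.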

Sunday, February 19, 2012

HA requirements for Devs

What should Operations ask from Devs to consider when writing their code to
handle High Availability ?
On Feb 12, 10:35 am, "Hassan" <has...@.hotmail.com> wrote:
> What should Operations ask from Devs to consider when writing their code to
> handle High Availability ?
Make sure that they don't hard code the server name or the IP ... The
code should pick up these variables from an ini file ... cause when
the primary server fails & you want it to point to the
secondary ..... all you need to do is to inform the users to change
the IP or the server name to start using the application ...
|||If the applications are expected to gracefully handle a failover, developers
will need to include the appropriate retry code.
Hope this helps.
Dan Guzman
SQL Server MVP
"Hassan" <hassan@.hotmail.com> wrote in message
news:uq9upemTHHA.2256@.TK2MSFTNGP02.phx.gbl...
> What should Operations ask from Devs to consider when writing their code
> to handle High Availability ?
>
>
|||"Dan Guzman" <guzmanda@.nospam-online.sbcglobal.net> wrote in message
news:074A70E2-830C-48E2-93E5-330E8F24E84E@.microsoft.com...
> If the applications are expected to gracefully handle a failover,
> developers will need to include the appropriate retry code.
>
Most HA failover scenarios appear like a brief server outage to the
application. Retrying transactions and hiding the fact that an outage
occurred is sometimes an option, but it's difficult to code and test, and if
the outage is not _very_ brief this strategy will fail anyway. More
critical is making sure that the application resumes functioning correctly
after a brief server outage.
I would say it's acceptable to fail all the pending transactions and give
the users an error message. But when the users try again after the outage,
the application needs to work. For instance if the application holds open
connections to the database, it needs to close and re-open the connections
on certain errors. Especially important is ensuring that any server
applications or services resume gracefully, and don't require administrative
intervention.
David
|||Thanks for adding this David. I agree that an error message during the
failover is usually acceptable in the front-end and that it is particularly
important that middle-tier service apps handle the brief failover outage
without manual intervention in order to maximize availability. Retrying
transactions can be tricky unless the app is architected with this in mind.
Hope this helps.
Dan Guzman
SQL Server MVP
"David Browne" <davidbaxterbrowne no potted meat@.hotmail.com> wrote in
message news:OOJgzNrTHHA.600@.TK2MSFTNGP05.phx.gbl...
>
> "Dan Guzman" <guzmanda@.nospam-online.sbcglobal.net> wrote in message
> news:074A70E2-830C-48E2-93E5-330E8F24E84E@.microsoft.com...
>
> Most HA failover scenarios appear like a brief server outage to the
> application. Retrying transactions and hiding the fact that an outage
> occurred is sometimes an option, but it's difficult to code and test, and
> if the outage is not _very_ brief this strategy will fail anyway. More
> critical is making sure that the application resumes functioning correctly
> after a brief server outage.
> I would say it's acceptable to fail all the pending transactions and give
> the users an error message. But when the users try again after the
> outage, the application needs to work. For instance if the application
> holds open connections to the database, it needs to close and re-open the
> connections on certain errors. Especially important is ensuring that any
> server applications or services resume gracefully, and don't require
> administrative intervention.
> David

HA requirements for Devs

What should Operations ask from Devs to consider when writing their code to
handle High Availability ?On Feb 12, 10:35 am, "Hassan" <has...@.hotmail.com> wrote:
> What should Operations ask from Devs to consider when writing their code t
o
> handle High Availability ?
Make sure that they dont hard code the server name or the IP ... The
code should pick up theses variables from an ini file... cause when

HA requirements for Devs

What should Operations ask from Devs to consider when writing their code to
handle High Availability ?
On Feb 12, 10:35 am, "Hassan" <has...@.hotmail.com> wrote:
> What should Operations ask from Devs to consider when writing their code to
> handle High Availability ?
Make sure that they don't hardcode the server name or the IP. The
code should pick up these values from an ini file, because when the
primary server fails and you want to point it to the secondary, all
you need to do is inform the users to change the IP or the server
name to start using the application.
|||If the applications are expected to gracefully handle a failover, developers
will need to include the appropriate retry code.
Hope this helps.
Dan Guzman
SQL Server MVP
"Hassan" <hassan@.hotmail.com> wrote in message
news:uq9upemTHHA.2256@.TK2MSFTNGP02.phx.gbl...
> What should Operations ask from Devs to consider when writing their code
> to handle High Availability ?
>
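A sketch of the "appropriate retry code" Dan mentions, in Python; the error type, attempt count, and delay are illustrative assumptions, and a real app would catch its driver's transient-error codes instead:

```python
import time

class TransientConnectionError(Exception):
    """Stands in for a driver error meaning the server is briefly away."""

def run_with_retry(operation, attempts=3, delay_seconds=1.0):
    """Retry an operation across a brief outage, so a fast failover
    looks like a short pause to the caller instead of a hard failure."""
    last_error = None
    for _ in range(attempts):
        try:
            return operation()
        except TransientConnectionError as err:
            last_error = err
            time.sleep(delay_seconds)  # give the failover time to finish
    # The outage was not brief; give up and surface the error.
    raise last_error
```

If the outage outlasts the retries, the error still reaches the caller, which is the fallback behavior discussed in the rest of the thread.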
>|||"Dan Guzman" <guzmanda@.nospam-online.sbcglobal.net> wrote in message
news:074A70E2-830C-48E2-93E5-330E8F24E84E@.microsoft.com...
> If the applications are expected to gracefully handle a failover,
> developers will need to include the appropriate retry code.
>
Most HA failover scenarios appear like a brief server outage to the
application. Retrying transactions and hiding the fact that an outage
occurred is sometimes an option, but it's difficult to code and test, and if
the outage is not _very_ brief this strategy will fail anyway. More
critical is making sure that the application resumes functioning correctly
after a brief server outage.
I would say it's acceptable to fail all the pending transactions and give
the users an error message. But when the users try again after the outage,
the application needs to work. For instance if the application holds open
connections to the database, it needs to close and re-open the connections
on certain errors. Especially important is ensuring that any server
applications or services resume gracefully, and don't require administrative
intervention.
David
|||Thanks for adding this David. I agree that an error message during the
failover is usually acceptable in the front-end and that it is particularly
important that middle-tier service apps handle the brief failover outage
without manual intervention in order to maximize availability. Retrying
transactions can be tricky unless the app is architected with this in mind.
Hope this helps.
Dan Guzman
SQL Server MVP
"David Browne" <davidbaxterbrowne no potted meat@.hotmail.com> wrote in
message news:OOJgzNrTHHA.600@.TK2MSFTNGP05.phx.gbl...
>
> "Dan Guzman" <guzmanda@.nospam-online.sbcglobal.net> wrote in message
> news:074A70E2-830C-48E2-93E5-330E8F24E84E@.microsoft.com...
>> If the applications are expected to gracefully handle a failover,
>> developers will need to include the appropriate retry code.
>>
>
> Most HA failover scenarios appear like a brief server outage to the
> application. Retrying transactions and hiding the fact that an outage
> occurred is sometimes an option, but it's difficult to code and test, and
> if the outage is not _very_ brief this strategy will fail anyway. More
> critical is making sure that the application resumes functioning correctly
> after a brief server outage.
> I would say it's acceptable to fail all the pending transactions and give
> the users an error message. But when the users try again after the
> outage, the application needs to work. For instance if the application
> holds open connections to the database, it needs to close and re-open the
> connections on certain errors. Especially important is ensuring that any
> server applications or services resume gracefully, and don't require
> administrative intervention.
> David
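David's point about held-open connections (close and re-open on certain errors, so the app resumes without administrative intervention) might be sketched like this; the connection factory and error class are hypothetical placeholders for whatever the real driver provides:

```python
class ConnectionLostError(Exception):
    """Placeholder for a driver error meaning the connection died."""

class ReconnectingClient:
    """Holds one open connection; on a connection-lost error it drops
    the dead connection and opens a fresh one on the next call, so the
    app resumes after a failover without manual intervention."""

    def __init__(self, connect):
        self._connect = connect  # factory returning a new connection
        self._conn = None

    def execute(self, work):
        if self._conn is None:
            self._conn = self._connect()
        try:
            return work(self._conn)
        except ConnectionLostError:
            self._conn = None  # discard the dead connection
            raise  # this call fails; the next call reconnects
```

The failing call itself still errors out (the pending work is lost, which the thread treats as acceptable); what matters is that the next call transparently gets a fresh connection.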