Monday, February 27, 2012

Hands-on labs feedback

Whoever decided - http://www.mcwtech.com/? - to lock the material on these
labs [http://msdn.microsoft.com/sql/2005/...bs/default.aspx] beyond compare
has made one pissed off dev out of me...
For god's sake you can't even select the damn queries off the freaking PDFs...
Aaahhgggrrr!!
I think I'd better call it a stinking day.|||Which one of them is causing you problems? The lab manuals I get from there
are Word docs and I have no problem copying the text/queries.
Regards
Steen
Francisco Lopez wrote:
> Whoever decided - http://www.mcwtech.com/? - to lock the material on
> these labs [http://msdn.microsoft.com/sql/2005/...bs/default.aspx]
> beyond compare has made one pissed off dev out of me...
> For god's sake you can't even select the damn queries off the
> freaking PDFs...
> Aaahhgggrrr!!
> I think i better call it a stinking day.|||Hello Steen,
I was playing with the SQL Server 2005 Beta 2 Resource Kit Hands-On Labs
DVD I got at a conference.
Maybe I should try downloading from the website.
Thanks.

Handling world time with SQL and ASP.NET

Hi,

I would like to capture the date and time in my SQL database during a certain operation. This piece of data will be displayed on an ASP.NET page.

When displaying the date time, I want it converted to the local time zone equivalent as set in the user's operating system.

What setup do I need on the SQL side, and what coding in the ASP.NET page?

Thanks.|||The following MSDN article may help you to understand date & time and also avoid inherent problems when dealing with different time zones.

Best Practices Using DateTime in the .NET Framework

Hope this helps

Regards

Wayne Phipps|||Hi Wayne,

Thanks. That's a great article on issues related to saving and computing date time when daylight saving is relevant.

I'm also looking at saving the date time in one time zone and then displaying it relative to the target user's system. I managed to find a web site dedicated to addressing this issue, SimpleTimeZone .NET Framework Class (http://www.michaelbrumm.com/simpletimezone.html). However, I have not yet evaluated the .NET class concerned.

Thanks again for your reply.
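The pattern both replies point toward can be sketched outside SQL entirely: store the timestamp in UTC and convert it to the viewer's zone only at display time. A minimal Python sketch of the idea (the sample instant and zone name are invented for illustration):

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo  # standard library since Python 3.9

# Store timestamps in UTC on the server side...
stored_utc = datetime(2012, 2, 27, 14, 30, tzinfo=timezone.utc)

# ...and convert to the user's zone only when rendering the page.
local = stored_utc.astimezone(ZoneInfo("America/New_York"))
print(local.strftime("%Y-%m-%d %H:%M"))  # 2012-02-27 09:30 (EST is UTC-5 in February)
```

The key design choice is that the database never stores a local time; daylight-saving and zone rules are applied once, at the presentation layer.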

Handling wildcard characters in query string

Hi

First interaction with the forum.
My query is:

I have a User Management module in my application where I created a user with the name

`~!@.#$@.%^&*()[_]+|}{":?><-=\[[]];',./

Now I have functionality to search for an existing user. For that, the user gives a search string or a single character and it finds all the records containing it.

How do I go about it? The SP I created for it gives correct results except in the following cases:

1. Search for % - Gives all record
2. Search for _ - Gives all records
3. Search for [ - Gives NO record
4. Search for the whole string - Gives NO Record

I handled a few issues:

1. replaced [ by [[]
2. replaced _ by [_]

So issues 2 & 3 are resolved.

I tried replacing % with [%] but it did not work.

Could someone please help?

Thanks in advance
Ashutosh|||You could search the string with the CHARINDEX function instead of using LIKE.

HTH|||You could search the string with CHARINDEX function instead of using LIKE.

HTH

Sorry, but I did not get your point as to how this will help me.

Ashutosh|||What is your current where predicate?|||What is your current where predicate?

WHERE FName like '%`~!@.#$%^&*()[_]+|}{":?><-=\[[]];'',./%'

In the front-end code I handle it so that input params come wrapped in %...% qualifiers and ' is replaced by ''.

On the SQL side I replace _ with [_] and [ with [[]

Ashutosh|||Why use LIKE? Is there a reason you can't search for the literal value?

WHERE FName = '`~!@.#$%^&*()[_]+|}{":?><-=\[[]];'',./'
Otherwise, if you are searching for a string portion, use charindex
WHERE CHARINDEX('`~!@.#$%^&*()[_]+|}{":?><-=\[[]];'',./', FName) >0
HTH

EDIT - you would need to remove the code adding the extra square brackets as you are searching for literals now.|||Why use LIKE? Is there a reason you can't search for the literal value?

WHERE FName = '`~!@.#$%^&*()[_]+|}{":?><-=\[[]];'',./'
Otherwise, if you are searching for a string portion, use charindex
WHERE CHARINDEX('`~!@.#$%^&*()[_]+|}{":?><-=\[[]];'',./', FName) >0
HTH

EDIT - you would need to remove the code adding the extra square brackets as you are searching for literals now.

Well, I am pretty reluctant to change the SP and UDF for that.

If I could somehow get around it with just some sort of REPLACE statement etc., that would be nice :)

Ashutosh|||Well, I am pretty reluctant to change the SP and UDF for that.

If I could somehow get around it with just some sort of REPLACE statement etc., that would be nice :)


Ashutosh|||There's another reason - sargability. LIKE '%something%' isn't an efficient search if you are looking for the exact string match rather than a portion. It appears that you are looking for one bodge to correct for another.|||There's another reason - sargability. LIKE '%something%' isn't an efficient search if you are looking for the exact string match rather than a portion. It appears that you are looking for one bodge to correct for another.

Probably I have gone the harder way.

I created a function to check for this: if there is an exact match, it passes the parameter as a plain string; otherwise it pads the % characters onto it.

Ashutosh|||Well, I am pretty reluctant to change the SP and UDF for that.

If I could somehow get around it with just some sort of REPLACE statement etc., that would be nice :)


Ashutosh|||If you aren't keen on CHARINDEX and you need to search for portions then perhaps:

WHERE REPLACE(REPLACE(FName, '%', 'Wild1'), '_', 'Wild2') LIKE REPLACE(REPLACE('`~!@.#$%^&*()[_]+|}{":?><-=\[[]];'',./', '%', 'Wild1'), '_', 'Wild2')
It ain't exactly pretty though.|||If you aren't keen on CHARINDEX and you need to search for portions then perhaps:

WHERE REPLACE(REPLACE(FName, '%', 'Wild1'), '_', 'Wild2') LIKE REPLACE(REPLACE('`~!@.#$%^&*()[_]+|}{":?><-=\[[]];'',./', '%', 'Wild1'), '_', 'Wild2')
It ain't exactly pretty though.

I did the same, but in that case it also replaces the leading and trailing % which come from the code itself, and hence the criteria change.

%%% --> should come out as %[%]%

but it comes out as [%][%][%]

Ashutosh|||This is an SP yes? So '%`~!@.#$%^&*()[_]+|}{":?><-=\[[]];'',./%' is actually passed as a parameter - like @.myParam?

Handling very large XML result sets

I am writing a .NET based application to create large XML data files using
SQLXML classes and FOR XML EXPLICIT queries. What are some strategies I can
use to break up and process these large result sets? The overhead of issuing
multiple queries by breaking them up via WHERE clause filters isn’t the way I
want to go since my queries are very large and take significant time to
process within SQL server.
I am currently experiencing out of memory exceptions on some larger result
sets (~50-60 Mbytes total XML file size). My first attempt was using
SqlXmlCommand.ExecuteXmlReader and an XmlDocument via this snippet of code:
XmlReader xr = forXMLCommand.ExecuteXmlReader();
XmlDocument xd = new XmlDocument();
xd.Load(xr);
This throws a System.OutOfMemoryException on the call to ExecuteXmlReader
when the result set gets very large.
I also tried using SqlXmlCommand.ExecuteStream thinking I could read a
buffer of chars at a time to process these large result sets but this also
resulted in a System.OutOfMemoryException on the call to ExecuteStream:
Stream s = forXMLCommand.ExecuteStream();
StreamWriter sw3 = new StreamWriter(mResultsFileName);
using (StreamReader sr = new StreamReader(s))
{
char[] c = null;
while (sr.Peek() >= 0)
{
c = new char[10000];
int numRead = sr.Read(c, 0, c.Length);
sw3.Write(c, 0, numRead);
}
}
I have tried running my application on two different systems, one with 1G of
main memory and the other a Win2K3 server with 8G of main memory. Both
systems seem to run out of memory at the same 50-60 MB limit. Are there any
.NET memory settings I can tweak to give my .NET application more memory?
Thanks for your suggestions and ideas,
Scott
The XmlReader is a streaming interface which should not run out of memory
via the SqlXmlCommand.ExecuteStream method.
Loading into an XmlDocument however will cache the entire document into
memory.
Can you remove the following two lines from your repro and see if you are
still having the problem:
XmlDocument xd = new XmlDocument();
xd.Load(xr);
Thanks -
Andrew Conrad
Microsoft Corp
http://blogs.msdn.com/aconrad
|||Andrew,
That was exactly my thought as well, but ExecuteStream is throwing an
OutOfMemoryException. I am NOT calling XmlDocument.Load in the code that
uses ExecuteStream.
Here is my full method I am using:
private void ExecuteSQLXMLCommandExecuteStream()
{
try
{
SqlXmlCommand forXMLCommand = new SqlXmlCommand("Provider=SQLOLEDB;DATA
SOURCE=Gibraltar;Initial Catalog=RDCModel;User ID=sa;Password=XXXX");
forXMLCommand.CommandType = SqlXmlCommandType.Sql;
StreamReader sr1 = new StreamReader(mQueryFileName);
string query = sr1.ReadToEnd();
sr1.Close();
query = query.Replace("\r\n", " ");
query = query.Replace("\t", " ");
forXMLCommand.CommandText = query;
Stream s = forXMLCommand.ExecuteStream();
StreamWriter sw3 = new StreamWriter(mResultsFileName);
using (StreamReader sr = new StreamReader(s))
{
char[] c = null;
while (sr.Peek() >= 0)
{
c = new char[10000];
int numRead = sr.Read(c, 0, c.Length);
sw3.Write(c, 0, numRead);
}
}
sw3.Close();
}
catch (SqlXmlException ex)
{
ex.ErrorStream.Position = 0;
string sqlErrorString;
sqlErrorString = new StreamReader(ex.ErrorStream).ReadToEnd();
Console.WriteLine(sqlErrorString);
RDCUtilities.WriteToLog(sqlErrorString);
}
catch (Exception ex)
{
Console.WriteLine(ex.Message);
Console.WriteLine(ex.StackTrace);
RDCUtilities.WriteToLog(ex.Message);
}
}
""Andrew Conrad"" wrote:

> The XmlReader is a streaming interface which should not run out of memory
> via the SqlXmlCommand.ExecuteStream method.
> Loading into an XmlDocument however will cache the entire document into
> memory.
> Can you remove the following two lines from your repro and see if you are
> still having the problem:
> XmlDocument xd = new XmlDocument();
> xd.Load(xr);
> Thanks -
> Andrew Conrad
> Microsoft Corp
> http://blogs.msdn.com/aconrad
>
|||Try using SqlXmlCommand.ExecuteToStream() instead of ExecuteStream.
Because of some technical limitations with COM interop, ExecuteStream
caches results.
Andrew Conrad
Microsoft Corp
http://blogs.msdn.com/aconrad
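The principle behind both suggestions is the same: never materialize the whole document in memory; move fixed-size chunks from the streaming reader straight to the output file. A language-neutral sketch in Python, with in-memory streams standing in for the SQLXML stream and the result file:

```python
import io

def copy_stream(src, dst, chunk_size=10000):
    """Copy src to dst in fixed-size chunks so memory use stays flat."""
    while True:
        chunk = src.read(chunk_size)
        if not chunk:  # empty read means end of stream
            break
        dst.write(chunk)

# Stand-ins for the FOR XML stream and the output file.
src = io.StringIO("<row/>" * 100_000)  # roughly 600 KB of XML
dst = io.StringIO()
copy_stream(src, dst)
print(len(dst.getvalue()))  # 600000
```

Peak memory here is one chunk, regardless of total document size, which is exactly why a streaming API (ExecuteToStream, or an XmlReader consumed incrementally) scales where loading an XmlDocument does not.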


Handling variables

I have a script task that has a ReadWriteVariable = FilePath. I am doing the following below to calculate the FilePath variable. Instead of the output being C:\Rapagator\Downloaded\RETS_Search_ResidentialProperty_20060403094343.txt, it just shows RETS_Search_ResidentialProperty_20060403094343.txt. Any ideas how to resolve this problem?

Imports System
Imports System.Data
Imports System.Math
Imports Microsoft.SqlServer.Dts.Pipeline.Wrapper
Imports Microsoft.SqlServer.Dts.Runtime.Wrapper

Public Class ScriptMain
Inherits UserComponent
Public GetFilePath As String = ""

Public Overrides Sub Input0_ProcessInputRow(ByVal Row As Input0Buffer)
GetFilePath = "C:\Rapagator\Downloaded" + Row.FileName
End Sub
Public Overrides Sub PostExecute()
Variables.FilePath = GetFilePath
MyBase.PostExecute()
End Sub

End Class

How do you know the variable value is only the filename part you state? You cannot debug inside the Script Component, so perhaps add a MessageBox or log an event to write out the values, just to be sure. Perhaps set a breakpoint as well and add the variable to the watch window. Note that the Variables window only shows design-time values during runtime, even when completed or stopped on a breakpoint; use the watch window to get real values.

This does seem rather strange: are you only expecting one row to flow through this component? Otherwise you will only store the last row's value in the variable. Is that what you want?


Does the filename start with a "\"? If not, the path looks invalid - C:\Rapagator\DownloadedMYFILE. Top tip: use System.IO.Path.Combine(x, y), as that will sort out path separators for you.
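The missing-separator bug that Path.Combine avoids can be shown with Python's ntpath (the Windows-path flavor of os.path, usable on any platform); the folder and filename below are taken from the question:

```python
import ntpath

folder = r"C:\Rapagator\Downloaded"
name = "RETS_Search_ResidentialProperty_20060403094343.txt"

# Naive concatenation silently drops the separator:
print(folder + name)  # ...DownloadedRETS_Search_... (wrong)

# Joining inserts the separator only when it is needed:
print(ntpath.join(folder, name))
```

Like Path.Combine, ntpath.join also copes with a folder that already ends in a backslash, so callers never have to reason about trailing separators.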

Handling UNC path names

What is the best handling of a UNC Pathname?

What is the max length of a path name?
Which is the best type of field in SQL Server to use for a path name?

Thanks

I would store this in a VARCHAR field of the appropriate length.

HTH, Jens Suessmeyer,

|||

What is the max length of the path & filename? It used to be 255 chars but I believe it has grown since then.

|||

The maximum length of a UNC path is 260 Unicode characters (MAX_PATH, defined in windows.h I believe). Search Google for MAX_PATH to see lots of people talking about this issue.

So a NVARCHAR(260) should be sufficient unless you allow the \\?\ prefix that bypasses normal MAX_PATH length. If you only have one locale you use for paths, then you can potentially use VARCHAR(260).
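A validation check matching the advice above might look like this sketch (260 as the classic MAX_PATH, which counts the terminating NUL, so a plain path must be shorter than 260 characters; the \\?\ prefix opts out of the limit):

```python
MAX_PATH = 260  # classic Windows limit, counting the terminating NUL

def fits_classic_max_path(path: str) -> bool:
    # Paths with the \\?\ prefix bypass the classic MAX_PATH limit.
    if path.startswith("\\\\?\\"):
        return True
    return len(path) < MAX_PATH

print(fits_classic_max_path(r"\\server\share\file.txt"))  # True
print(fits_classic_max_path("C:\\" + "a" * 300))          # False
print(fits_classic_max_path("\\\\?\\C:\\" + "a" * 300))   # True
```

A column sized NVARCHAR(260) then matches the validation rule one-for-one, and any row that fails the check would also fail on the file system.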

Handling Transaction

hi friends,

I would like to put a set of SQL statements under a transaction and have SQL Server take care of committing or rolling back the entire set depending upon the success or failure of the statements in it. I want the whole set either to succeed or to fail.

When I go through the docs, i find that SQL Server 2000 operates three transaction modes:
Autocommit transactions : Each individual statement is a transaction.

Explicit transactions : Each transaction is explicitly started with the BEGIN TRANSACTION statement and explicitly ended with a COMMIT or ROLLBACK statement.

Implicit transactions: A new transaction is implicitly started when the prior transaction completes, but each transaction is explicitly completed with a COMMIT or ROLLBACK statement.

As one can see, it seems it is not possible to define "atomic compound SQL statements". Even if I use explicit transactions, it is not possible to achieve this, since I couldn't find a mechanism to handle errors for a group of statements.

I wonder how to write an atomic compound SQL statement in SQL Server.
Can anybody please help me with this?

Jake|||The only solution that I am aware of is enclosing the inserts/updates into one transaction and checking each for success. If one fails, rollback; if all succeed, commit.|||Hi Kaiowas,

Thanks for the suggestion. Instead of checking for success and failure on each statement, I found another way of doing this.
If we put the set of statements in a BEGIN TRANSACTION ... COMMIT TRANSACTION block, the COMMIT TRANSACTION will only be executed if all the statements succeed; otherwise all the statements get rolled back. This solves my problem.

But there is still no way to handle exceptions for a block of statements.
Does the Yukon release have support for this? Any idea?

Jake

Originally posted by Kaiowas
The only solution that I am aware of is enclosing the inserts/updates into one transaction and checking each for success. If one fails, rollback; if all succeed, commit.
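The all-or-nothing behavior discussed above can be sketched with Python's sqlite3, where a `with conn:` block commits on success and rolls back the whole set on any failure (the table, CHECK constraint, and amounts are invented for the demo):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts "
             "(id INTEGER PRIMARY KEY, balance INTEGER CHECK (balance >= 0))")
conn.executemany("INSERT INTO accounts VALUES (?, ?)", [(1, 100), (2, 0)])
conn.commit()

def transfer(amount):
    try:
        with conn:  # commit on success, roll back everything on exception
            conn.execute("UPDATE accounts SET balance = balance - ? WHERE id = 1",
                         (amount,))
            conn.execute("UPDATE accounts SET balance = balance + ? WHERE id = 2",
                         (amount,))
        return True
    except sqlite3.IntegrityError:
        return False

print(transfer(150))  # False: the CHECK fails, both updates are rolled back
print(conn.execute("SELECT balance FROM accounts ORDER BY id").fetchall())
```

Either both updates land or neither does; the caller never sees a half-applied transfer, which is the "atomic compound statement" behavior the thread is after.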

Handling SQLServerCE DataBase By windows application ?

Hi

I have the following problem:

I attempted to connect to a SQLServerCE database

to insert and update its rows, but I noticed that I need the reference:

System.Data.SqlServerCe

I went to (Add References) but I didn't find it.

What should I do to get past this problem?

Please help me!

Hi Imad

I have moved your thread to the Devices team, who should be able to help you

mairead

PM, TS Data

|||

Hello and sorry for the delayed reply.

Just a question first - are you attempting to open a SqlServerCe database on your desktop PC or on a mobile device (like a Pocket PC / Smartphone)? This is important because we want to add the correct DLL to your project.

If you're opening the database on the desktop, then we can find the System.Data.SqlServerCe.DLL in the directory where DEVENV.EXE is. That would be something like C:\Program Files\Microsoft Visual Studio 8\Common7\IDE

If you're opening the database on a device, then we can find the System.Data.SqlServerCe.DLL in the Mobile SDK folder. That would be something like C:\Program Files\Microsoft Visual Studio 8\SmartDevices\SDK\SQL Server\Mobile\v3.0

If either of the DLLs are missing then it means your SQLServerCe SDK is not installed on your machine. The easiest way to fix this is to REPAIR the visual studio installation. This will re-install the SDK and ensure the DLLs are present on the machine.

Please let me know how it works out,

Kind regards,

Carlton Lane

Microsoft Visual Basic Team

|||

I am trying to use a desktop application to open a SqlServerCe database that is located on the PDA, but it throws this exception:

Unable to load DLL 'sqlceme30.dll': The specified module could not be found. (Exception from HRESULT: 0x8007007E)

I checked and see System.Data.SqlServerCe.DLL exists under:

D:\Program Files\Microsoft Visual Studio 8\Common7\IDE

D:\Program Files\Microsoft Visual Studio 8\SmartDevices\SDK\SQL Server\Mobile\v3.0

Actually it throws that exception no matter what database I try to open. Here is my code:

connStr = "Data Source =""Mobile Device\Program Files\Barcode_PDA\pda.sdf"";"
conn1 = New SqlServerCe.SqlCeConnection(connStr) < this generates the error

Thanks for your help!

|||

Hi Alex,

I'm sorry, but the scenario of opening a database on a device remotely from a desktop PC isn't supported by the SQLCE engine. Eventually, you'll get an error about the connection string being invalid. This is because the engine targets local data scenarios - that is, where the application and database reside on the same machine, in the same process. This scenario starts to touch on client / server scenarios, which currently aren't supported.

Your current error about the missing DLLs is telling us that the application is starting but can't find the engine. These DLLs are found next to the System.Data.SqlServerCe.dll file. The ones in Common7\IDE are for your PC. The ones in SmartDevices\SDK are for the device. For a PC application, copy the ones from Common7\IDE into the executing directory of your application. But again, after you get past this error, you will eventually get an error about the connection string being invalid, because this scenario isn't supported.

HTH and good luck,

Carlton

|||

Hi everybody,

First, thank you for helping me.

I want to tell you that I solved my problem easily by installing the SQL Server Everywhere Edition CTP,

which installed the SQL ServerCE tools, and then all the DLLs loaded correctly.


Handling SQL Exception

I'm unsure how to handle an SQL Exception correctly when the database is unavailable/offline.

I have my aspx file with the C# code-behind, but all of the SQL stuff is done in a separate code file in the App_Code directory.
E.g.

CODE-BEHIND
DatabaseModifier.deleteUser(username);

DATABASEMODIFIER.cs
public static void deleteUser(string username)
{
    SqlConnection conn = SqlLogin.SqlConnect;
    // Parameterized to avoid SQL injection through the username value.
    SqlCommand command = new SqlCommand("DELETE FROM <table> WHERE Username = @username", conn);
    command.Parameters.AddWithValue("@username", username);
    conn.Open();
    command.ExecuteNonQuery();
    conn.Close();
}

Now, that code works perfectly, however if the database I'm connecting to is offline, an SQLException is thrown and because the SQL is handled in my DatabaseModifier class, I'm not sure how to handle it correctly.
If I use a Try/Catch block in my code-behind, it doesn't get thrown because the error occurs in my DatabaseModifier class. If I use a Try/Catch block in my DatabaseModifier class, what can I put in the catch block that will inform the user of the database being offline and/or how can I perform a url redirection?

Any help is greatly appreciated.

You can write the connection-open code in the database modifier class with a try-catch block, and in the catch block you can just rethrow the exception. Then in the page code-behind you can wrap the database activity in a try-catch block, and in this catch you'll receive any exception generated in the database modifier class. As the code in the database modifier class does nothing but rethrow the exception, you can handle it in the page code.

Hope this will help.

|||

So in the DatabaseModifier class:
Try
{
code
}
Catch(SqlException)
{
//nothing in here
}

In code behind:
Try
{
code
}
Catch(SqlException e)
{
//output e.Message to user?
}

|||

Something like that:

In database modifier:

Try
{
    code
}
Catch(SqlException exp)
{
    throw exp;
}

In code behind:

Try
{
    code
}
Catch(SqlException e)
{
    //output e.Message to user?
}

Now, you can see that the db modifier just throws any exceptions which it receives during any operation (this is fair also because you don't have any user out put system set in the db modifier class), and the page code is set up to handle the exceptions thrown by the db modifier as well as any other exceptions during the code execution of its own.

Hope this will help
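The layering advice above can be sketched generically in Python: the data layer rethrows a failure without deciding anything about presentation, and the page-level handler is the one place that turns errors into user-facing text (the exception type, function names, and messages are all invented):

```python
class DataAccessError(Exception):
    """Raised by the data layer; carries no UI decisions."""

def delete_user(username, db_online=False):
    # Stand-in for conn.Open() failing when the database is offline.
    try:
        if not db_online:
            raise ConnectionError("database unavailable")
    except ConnectionError as exc:
        # Data layer: rethrow, don't decide what the user sees.
        raise DataAccessError("could not delete user") from exc

def page_delete_handler(username):
    # Code-behind: the single place that formats errors for the user.
    try:
        delete_user(username)
        return "User deleted."
    except DataAccessError:
        return "The database is currently unavailable. Please try again later."

print(page_delete_handler("jsmith"))
```

Chaining with `raise ... from exc` preserves the original cause for logging while the user only ever sees the friendly page-level message.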

handling special characters using oledb to oracle

hi,

I am using OLE DB to connect to Oracle.
I want to know if there is a way to handle different character sets in this type of connection. For SQL-to-SQL I have been using AutoTranslate in the connection string.
What about SQL OLE DB to Oracle? How can I make sure that the data from SQL to Oracle is transferred as-is?

many thanks.

You might be better off asking this on the Data Access forum:

http://forums.microsoft.com/MSDN/ShowForum.aspx?ForumID=87&SiteID=1

-Jamie

Handling Schema Changes

What is the best way to handle schema changes as database needs evolve?
What I need to find out is the best way to write database interface code
that stays flexible in the face of schema changes. Would having stored
procedures as an interface mechanism be best, so that old app software
will still be compatible? It is impractical for us to upgrade all apps
simultaneously, so if a new column is added to a table, old apps will not
fill in this column while newer apps that support it will.

Thanks|||Use stored procedures as your data access layer. Supporting multiple
applications is much easier that way because the SPs can insulate the
application from underlying schema changes. This is just one of the
benefits of using SPs.

--
David Portas
SQL Server MVP
--
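The insulation idea can be sketched generically: give the new column a default at the interface layer (the stored procedure's parameter), so old callers that don't know about it keep working while new callers supply it. A Python stand-in for the SP signature (the member table and column names are invented):

```python
# Stand-in for a stored procedure: the 'email' column was added later,
# so the interface defaults it and old callers still work unchanged.
def insert_member(name, dob, email=None):
    row = {"name": name, "dob": dob, "email": email}
    return row

old_app = insert_member("Johnson", "1970-01-01")             # unaware of email
new_app = insert_member("Johnson", "1970-01-01", "j@x.com")  # supplies it
print(old_app["email"], new_app["email"])  # None j@x.com
```

In T-SQL the same effect comes from adding the new parameter with a DEFAULT, so existing EXEC calls compile and run without modification.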

Handling provider-level errors from .Net SqlClient Data Provider

Hi,

After reading this helpful blog entry:

http://blogs.msdn.com/sql_protocols/archive/2007/05/13/sql-network-interfaces-error-26-error-locating-server-instance-specified.aspx

I think this might be a good place to ask the following question.

I am writing the error handling code for my data access layer for a web application. I am using the Enterprise Library Data Access Application Block. Although this supports generic database connections, I realized that I need to handle errors specific to each database type. Microsoft SQL is the only database type I am using for now, so I am using a try...catch (SqlException e).

In testing my code, I intentionally changed the instance name in web.config to a name that does not exist. I get the very popular error 26 - Error Locating Server/Instance Specified. This is returned as a SqlException, but the SqlError.Number property is set to -1.

1. Am I getting "-1" because the provider hasn't actually connected to SQL yet, so it doesn't have an actual SQL error number?
2. Can I assume that (SqlError.Number == -1) is always a fatal, provider-level connection exception?
3. Will the provider ever use another SqlError.Number of its own? Or do all numbers besides -1 come from the SQL sysmessages table?
4. Is there a comprehensive list of what exceptions might be raised by the SqlClient provider, including #26?

The reason for all the questions is that in a web application, I want to prevent the end-user from seeing the "real" exception if it has to do with configuration errors. However, maybe there are other errors that the user should see and handle? It's hard to know without a full list of SqlClient provider errors, along with the SqlError.Number that each error maps to.

Thanks and regards,

Mark

I can only answer 1. SqlError.Number corresponds to the SQL Server message id from sys.messages table on the server, so if it is not -1 it indicates an error message coming from the server.

I believe it will be impossible to get an exhaustive list of errors numbers for each error on the server. There will always be newer error messages with new SQL Servers and they number in thousands, check out the sys.messages table on SQL Server 2005.

Thanks

Waseem

|||

Waseem Basheer - MSFT wrote:

I believe it will be impossible to get an exhaustive list of errors numbers for each error on the server. There will always be newer error messages with new SQL Servers and they number in thousands, check out the sys.messages table on SQL Server 2005.

Thanks Waseem. I'm familiar with sysmessages. For messages coming directly from SQL server, sysmessages is all I need.

I am looking for a comprehensive list of SqlClient messages that are NOT in sysmessages, i.e. the errors that are raised directly by the .NET SqlClient provider. Hopefully this is a much more limited number of messages.

Regards,

Mark

|||

Hi, mcbsys

Your question is more specific to .NET Data access, so, I recommend you post your question to the forum where many experts can help you quickly.

http://forums.microsoft.com/MSDN/ShowForum.aspx?ForumID=45&SiteID=1

HTH

Ming.

|||

MING LV wrote:

Your question is more specific to .NET Data access, so, I recommend you post your question to the forum where many experts can help you quickly.

http://forums.microsoft.com/MSDN/ShowForum.aspx?ForumID=45&SiteID=1

Good idea, I will try that.

Thanks,

Mark

handling parameters that could be missing

I have a search app that I'm developing in ASP.NET
It has 5 parameters
MRN, Name, Sex, DOB, SSN
All or any combination can be used.
what is the best way to handle the non-supplied parameters?
i.e. if the user only supplies 'Name' and 'MRN'
I want the SQL for the stored procedure to read as if it were just 2
parameters:
(but have one stored procedure that has parameter holders for all 5)
Select MRN, Name, Sex, DOB, SSN
From Membership
WHERE MRN = 123456 AND Name like 'Johns%'
The way I have been handling this is to rewrite a different SQL statement
for every possible situation, and pass that string to the adapter. But I can
see that I'm not taking advantage of the processing that SQL Server could do
in a stored procedure, and for 1.4 million records, my app is just timing out
all the time.
This must be a common issue. Could someone show me the ideal way to handle
this?

Have a look at
http://www.sommarskog.se/dyn-search.html
Roji. P. Thomas
Net Asset Management
http://toponewithties.blogspot.com
"jonefer" <jonefer@.discussions.microsoft.com> wrote in message
news:0FFF3A3A-AB66-4A96-B63E-73E937F7A164@.microsoft.com...
>I have a search app that I'm developing in ASP.NET
> It has 5 parameters
> MRN, Name, Sex, DOB, SSN
> All or any combination can be used.
> what is the best way to handle the non-supplied parameters?
> i.e. if the user selects only supplies 'Name' and 'MRN'
> I want the SQL for the stored procedure to read as if it were just 2
> parameters:
> (but have one stored procedure that has parameter holders for all 5)
> Select MRN, Name, Sex, DOB, SSN
> From Membership
> WHERE MRN = 123456 AND Name like 'Johns%'
> The way I have been handling this is to rewrite a different SQL statement
> for every possible situation, and pass that string to the adapter. But I
> can
> see that I'm not taking advantage of the processing that SQL server could
> do
> in a stored procedure, and for 1.4 Million records, my app is just timing
> out
> all the time.
> This must be a common issue. Could someone show me the ideal way to
> handle
> this?
>|||"jonefer" wrote:
> I have a search app that I'm developing in ASP.NET
> It has 5 parameters
> MRN, Name, Sex, DOB, SSN
> All or any combination can be used.
> what is the best way to handle the non-supplied parameters?
> i.e. if the user selects only supplies 'Name' and 'MRN'
> I want the SQL for the stored procedure to read as if it were just 2
> parameters:
> (but have one stored procedure that has parameter holders for all 5)
> Select MRN, Name, Sex, DOB, SSN
> From Membership
> WHERE MRN = 123456 AND Name like 'Johns%'
> The way I have been handling this is to rewrite a different SQL statement
> for every possible situation, and pass that string to the adapter. But I can
> see that I'm not taking advantage of the processing that SQL server could do
> in a stored procedure, and for 1.4 Million records, my app is just timing out
> all the time.
> This must be a common issue. Could someone show me the ideal way to handle
> this?
Have you tried something along the lines of:
create procedure p_MyProc
@pMyParm1 int = NULL, @pMyParm2 datetime = NULL,
@pMyParm3 varchar(128) = NULL, @pMyParm4 bit = NULL
as
if @pMyParm1 IS NULL do something ...
if @pMyParm2 IS NULL do something ...
if @pMyParm3 IS NULL do something ...
if @pMyParm4 IS NULL do something ...|||Thank you. That was excellent.
"Roji. P. Thomas" wrote:

> Have a look at
> http://www.sommarskog.se/dyn-search.html
> --
> Roji. P. Thomas
> Net Asset Management
> http://toponewithties.blogspot.com
>
> "jonefer" <jonefer@.discussions.microsoft.com> wrote in message
> news:0FFF3A3A-AB66-4A96-B63E-73E937F7A164@.microsoft.com...
>
>|||I implemented my search with the 5 parameters using your Dynamic SQL example.
But because of the way I want to present the final results... it always
times out.
So I will list what I'm trying to get in my final results as well as what
the Stored procedure looks like (perhaps you can help me tweak it)
What I want in the final selection:
SELECT DISTINCT MRN, MemNAME AS Member, DOB, SEX, SSN, GROUP, SGR,
[FROM-DT], [THRU-DT]
FROM qMembershipSelect AS Y
GROUP BY MRN, MemNAME, DOB, SEX, SSN, GROUP, SGR, [FROM-DT], [THRU-DT]
HAVING [THRU-DT] Is Null Or [THRU-DT] In (Select Max(X.[Thru-Dt]) From
qMembershipSelect As X Where X.MRN = Y.MRN And X.MRN Not in(Select Z.MRN
From qMembershipSelect As Z Where Z.[THRU-DT] Is Null))
ORDER BY MemNAME, [FROM-DT] DESC;
How I implemented this in the Stored Procedure:
CREATE PROCEDURE qMemberSelect
@MRN int = NULL,
@MemNAME nvarchar(40) = NULL,
@DOB datetime = NULL,
@SSN nvarchar(9) = NULL,
@SEX nvarchar(1) = NULL,
@debug bit = 0 AS
DECLARE @sql nvarchar(4000),
@paramlist nvarchar(4000)
SELECT @sql =
'SELECT [MRN],[MemName],[DOB],[SEX],[SSN],[GROUP],[SGR],[FROM-DT],[THRU-DT]
FROM MEMBERSHIP AS Y
WHERE 1=1'
if @MRN IS NOT NULL
SELECT @sql = @sql + ' AND MRN = @xMRN'
if @MemNAME IS NOT NULL
SELECT @sql = @sql + ' AND MemName like @xMemNAME + ''%'''
if @DOB IS NOT NULL
SELECT @sql = @sql + ' AND DOB = @xDOB'
if @SSN IS NOT NULL
SELECT @sql = @sql + ' AND SSN = @xSSN'
if @SEX IS NOT NULL
SELECT @sql = @sql + ' AND SEX = @xSEX'
SELECT @sql = @sql + ' GROUP BY MRN, MemNAME, DOB, SEX, SSN, [GROUP], SGR,
[FROM-DT], [THRU-DT]
HAVING len([THRU-DT]) = 0
Or [THRU-DT] In (Select Max(X.[Thru-Dt]) From
Membership As X Where X.MRN = Y.MRN And X.MRN
Not in (Select Z.MRN From Membership As Z Where len(Z.[THRU-DT]) = 0))
ORDER BY MemNAME, [FROM-DT] DESC'
if @debug = 1
PRINT @sql
SELECT @paramlist = '@xMRN int,
@xMemName nvarchar(40),
@xDOB datetime,
@xSSN nvarchar(9),
@xSEX nvarchar(1)'
EXEC sp_executesql @sql, @paramlist,
@MRN, @MemNAME, @DOB, @SSN, @SEX
But there must be a way to do this in two phases because trying it all in
one pull is just not efficient.
"Roji. P. Thomas" wrote:

> Have a look at
> http://www.sommarskog.se/dyn-search.html
> --
> Roji. P. Thomas
> Net Asset Management
> http://toponewithties.blogspot.com
>
> "jonefer" <jonefer@.discussions.microsoft.com> wrote in message
> news:0FFF3A3A-AB66-4A96-B63E-73E937F7A164@.microsoft.com...
>
>|||How about...
SELECT MRN, Name, Sex, DOB, SSN
FROM dbo.Membership
WHERE (MRN = @MRN or @MRN is null)
AND (Name like @Name or @Name is null)
AND (Sex = @Sex or @Sex is null)
etc.
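Written out in full for all five parameters, the pattern above might look like this (a sketch; the data types are guesses, since the thread doesn't give the table definition):

```sql
CREATE PROCEDURE dbo.SearchMembership
    @MRN  int         = NULL,
    @Name varchar(40) = NULL,
    @Sex  char(1)     = NULL,
    @DOB  datetime    = NULL,
    @SSN  char(9)     = NULL
AS
-- Each predicate collapses to TRUE when its parameter is NULL,
-- so omitted parameters simply don't filter
SELECT MRN, Name, Sex, DOB, SSN
FROM dbo.Membership
WHERE (MRN  = @MRN          OR @MRN  IS NULL)
  AND (Name LIKE @Name + '%' OR @Name IS NULL)
  AND (Sex  = @Sex          OR @Sex  IS NULL)
  AND (DOB  = @DOB          OR @DOB  IS NULL)
  AND (SSN  = @SSN          OR @SSN  IS NULL)
```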
TIP: If these are not user-specific tables, always specify the owner of the
object so that it is not ambiguous and SQL Server does not have to check for
<userid>.Membership and, if not found, check for dbo.Membership. This applies
to referencing tables, views, and stored procedures. Also, never prefix a
stored procedure name with sp_ or xp_ as that can cause unexpected side
effects.
Just my two cents,
Joe
"jonefer" wrote:

> I have a search app that I'm developing in ASP.NET
> It has 5 parameters
> MRN, Name, Sex, DOB, SSN
> All or any combination can be used.
> what is the best way to handle the non-supplied parameters?
> i.e. if the user selects only supplies 'Name' and 'MRN'
> I want the SQL for the stored procedure to read as if it were just 2
> parameters:
> (but have one stored procedure that has parameter holders for all 5)
> Select MRN, Name, Sex, DOB, SSN
> From Membership
> WHERE MRN = 123456 AND Name like 'Johns%'
> The way I have been handling this is to rewrite a different SQL statement
> for every possible situation, and pass that string to the adapter. But I can
> see that I'm not taking advantage of the processing that SQL server could do
> in a stored procedure, and for 1.4 Million records, my app is just timing out
> all the time.
> This must be a common issue. Could someone show me the ideal way to handle
> this?
>|||Joe
If you use sp_XXXXXX, SQL Server looks into the master database first even if you
qualify it with the database name. So we are looking in the master
database unnecessarily; that is why most folks suggest not using sp_XXXX. I never heard
of side effects... If you know of any, please share with us.
--
SQL SERVER DBA
"Joe from WI" wrote:
> How about...
> SELECT MRN, Name, Sex, DOB, SSN
> FROM dbo.Membership
> WHERE (MRN = @.MRN or @.MRN is null)
> AND (Name like @.Name or @.Name is null)
> AND (Sex = @.Sex or @.Sex is null)
> etc.
> TIP: If these are not user-specific tables, always specify the owner of the
> object so that it is not ambiguous and SQL Server does not have to check for
> <userid>.Membership and, if not found, check for dbo.Membership. This applies
> to referencing tables, views, and stored procedures. Also, never prefix a
> stored procedure name with sp_ or xp_ as that can cause unexpected side
> effects.
> Just my two cents,
> Joe
> "jonefer" wrote:
>

Handling out-dated transaction records

For performance issue, I believe many program should have a house-keeping procedure to clean up transaction history. Is there any best practice to perform this? Or should it be done simply by moving transaction data from the transaction table into a history table? Any better or consideration that I should be concerned of?Depends on your environment. Many places would like to keep records for atleast 2 years, but if you are in the medical field and some of your stuff may fall under HIPAA, then you need to keep the records I believe for like 7+ years. In those cases, I usually have an audit table that contains all the transaction data (It's written to via a trigger). Then I keep the transaction table fairly clean (only recent/open/pending), and if you need historical data, then I run my queries off the audit table instead which is never purged and has a good set of indexes on it.

Handling optional parameters

On many reports I have optional parameters, for example if the employee is filled in it will run for that employee but if it's null it runs for all employees. Which is the best way to handle this?

The two options I am currently looking at are:

select * from employee where (employee.id = @EmpID or @EmpID is Null)

select * from employee where isnull(@EmpID, employee.id) = employee.id

Anyone else have a different solution?

Currently we use the OR approach and I wanted to see if anyone had any thoughts before switching to using IsNull, which seems a bit better.

OR can be slow. I always use the ISNULL approach.

|||

Another way to do this is to include "*(ALL)" in your employee dataset and default the parameter to "*(ALL)". Then pass the selected value back to the stored procedure:

IF (@EMPID = '*(ALL)')

select * from Employee

Else

Select * from Employee where employee.id = @EMPID

This way it is more intuitive to End Users.

|||

The problem with using the if (@EmpID is null) approach is that you then need to maintain two queries. And for some reports that have 3 or more optional parameters this could mean having 8 or more queries to maintain just for optional parameters. We also thought about using an expression for the query, but some of the clients like to use the generic query view if they edit the report, so this makes that option unavailable.

I think we will be using the isnull option for our solution.

Thanks

|||

I thought this blog post was useful for this topic. You can combine approaches if you want.

http://bloggingabout.net/blogs/egiardina/archive/2007/06/26/sql-server-reporting-services-optional-parameters.aspx

|||

Techquest wrote:

Another way to do this is to include "*(ALL)" in your employee dataset and default the parameter to "*(ALL)". Then pass the selected value back to the stored procedure:

IF (@EMPID = '*(ALL)')

select * from Employee

Else

Select * from Employee where employee.id = @EMPID

This way it is more intuitive to End Users.

Not everyone uses stored procs to serve reports.

|||

Found an issue using the IsNull approach. If the optional parameter can be null in the table it will not return the null values.

EX. Middle name is a nullable field

select * from Employee where MiddleName = IsNull(@MiddleName, MiddleName)

This won't return all rows when the parameter is null; it will only return rows that have a non-null MiddleName.

Think it may be safer to use the OR approach now, because other people that modify reports and are not familiar with this may use IsNull in cases where it should not be used.

Anyone have any suggestions around this? I know most of the time the optional parameter does not go against a nullable field but could cause a lot of problems if you did it by mistake.

Thanks,

Kevin

|||

Yeah, avoid nullable fields :)

In case you do have them though you could use:

select * from Employee where COALESCE(MiddleName, '') = COALESCE(@MiddleName, MiddleName, '')

COALESCE is basically like ISNULL, but it takes any number of parameters rather than just 2 and returns the first non-null value.

|||

COALESCE functions differently than IsNull though. If you look at the execution plan, these both are the same:

select * from employee where employee.id = COALESCE(@EmpID, employee.id)

select * from employee where (employee.id = @EmpID or @EmpID is Null)

This defeats the purpose of using IsNull for the better execution plan with some indexes.
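To see the nullable-column pitfall from this thread for yourself, here is a small self-contained sketch (hypothetical table; written for SQL Server 2005-era syntax) comparing the ISNULL pattern with the OR pattern:

```sql
-- Hypothetical table with a nullable MiddleName column
DECLARE @Employee TABLE (id int, MiddleName nvarchar(40) NULL);
INSERT INTO @Employee VALUES (1, 'Ann');
INSERT INTO @Employee VALUES (2, NULL);
INSERT INTO @Employee VALUES (3, 'Bob');

DECLARE @MiddleName nvarchar(40);
SET @MiddleName = NULL;  -- "all employees" requested

-- ISNULL pattern: silently drops the row whose MiddleName is NULL,
-- because NULL = NULL is never true
SELECT id FROM @Employee
WHERE MiddleName = ISNULL(@MiddleName, MiddleName);

-- OR pattern: returns all three rows when the parameter is NULL
SELECT id FROM @Employee
WHERE (MiddleName = @MiddleName OR @MiddleName IS NULL);
```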

Handling of xml data within Oracle 9/10 and sql Server 2005

Hi there,
I would like to know the best way to handle Xml data stored on Oracle and
Sql Server 2005 using XQuery/XPath and AdoNet. Ideally, the C# code should
not be different for Oracle and Sql Server. Any hint ?
Regards
Sql Server 2005/2000 has an OPENXML statement.
When I used Oracle 9 (while back), I was very disappointed in its xml
capabilities.
First question:
Are you pushing xml into the db, or reading it out?
It looks like reading it out, but wanted to make sure.
"Oriane" <oriane@.guermantes.fr> wrote in message
news:BD4ABE46-921B-4C70-92E4-F949528428AC@.microsoft.com...
> Hi there,
> I would like to know the best way to handle Xml data stored on Oracle and
> Sql Server 2005 using XQuery/XPath and AdoNet. Ideally, the C# code should
> not be different for Oracle and Sql Server. Any hint ?
> Regards
|||Hi Sloan,
"sloan" <sloan@.ipass.net> a crit dans le message de
news:uZvmoiYuHHA.536@.TK2MSFTNGP06.phx.gbl...
> Sql Server 2005/2000 has a OPENXML statement.
> When I used Oracle 9 (while back), I was very disappointed in its xml
> capabilities.
And what about Oracle 10g ?
> First question:
> Are you pushing xml into the db, or reading it out?
Reading and writing, but mostly reading...
> It looks like reading it out, but wanted to make sure.
|||Oracle 10? No idea.
Here is what I found.
I now remember how much I hated Oracle XML.
Good luck. I have no more I can offer beyond this.
/* Actual Logic of This procedure */
--this is just a check to make sure it can be cast as a XMLTYPE document
SELECT sys.xmltype.createxml(in_errorlogxml) INTO xmlvar FROM dual;
convertedBlobToXMLType := XMLTYPE(in_errorlogxml);
SELECT SEQ_ErrorLogID.NEXTVAL INTO ErrorID FROM DUAL;
--There is an issue with referring to the XML directly (as a cast clob object)
--This is a workaround ... by putting the value into a temp table
--and then referring to that value, it will work.
delete from XMLTempHolderTable;
commit;
insert into XMLTempHolderTable values
(ErrorID,convertedBlobToXMLType);--in_errorlogxml);
commit;
INSERT INTO ERRORLOG (
ErrorID,
MachineName,
TimeStampValue,
FullName,
AppDomainName,
ThreadIdentity,
WindowsIdentity,
ExceptionType,
Message,
TargetSite,
Source,
StackTrace,
EntryDateTime
)
SELECT ErrorID,
extractValue(value(d),'ExceptionInformation/AdditionalInformationProperty/@ExceptionManager.MachineName'),
extractValue(value(d),'ExceptionInformation/AdditionalInformationProperty/@ExceptionManager.TimeStamp'),
extractValue(value(d),'ExceptionInformation/AdditionalInformationProperty/@ExceptionManager.FullName'),
extractValue(value(d),'ExceptionInformation/AdditionalInformationProperty/@ExceptionManager.AppDomainName'),
extractValue(value(d),'ExceptionInformation/AdditionalInformationProperty/@ExceptionManager.ThreadIdentity'),
extractValue(value(d),'ExceptionInformation/AdditionalInformationProperty/@ExceptionManager.WindowsIdentity'),
extractValue(value(d),'/ExceptionInformation/Exception/@ExceptionType'),
extractValue(value(d),'/ExceptionInformation/Exception/@Message'),
extractValue(value(d),'/ExceptionInformation/Exception/@TargetSite'),
extractValue(value(d),'/ExceptionInformation/Exception/@Source'),
extractValue(value(d),'/ExceptionInformation/Exception/StackTrace'),
sysdate
--FROM table (xmlsequence(extract(XMLTYPE.createXML(in_errorlogxml), '/ExceptionInformation'))) d; --Does not work
--FROM XMLTempHolderTable tmp,table
(xmlsequence(extract(xmltype(tmp.XMLValue), '/ExceptionInformation')))
; --if the XMLValue is a clob
FROM XMLTempHolderTable tmp,
table (xmlsequence(extract((tmp.XMLValue),
'/ExceptionInformation'))) d--; --if the XMLValue is a XMLType
WHERE tmp.XMLID = ErrorID;
--Here's the deal. The second and third "FROM" is reading the value from
--an intermediate table, and it works.
--The first FROM is trying to read the variable outright, and it fails.
--For some reason, the code cannot refer to the cast clob (as xmltype)
--directly, but if one puts it into an intermediate table and then reads it, it
--works.
COMMIT;
++++++++++++++++++++++++++=
DROP TABLE ERRORLOG
/
CREATE TABLE ERRORLOG (
/*
The below table definition maps to the information being provided by the
Microsoft.ApplicationBlocks.ExceptionManagement XMLPublisher
Here is a sample xml document.
<ExceptionInformation>
<AdditionalInformationProperty
ExceptionManager.MachineName="CelineDionXP1"
ExceptionManager.TimeStamp="11/8/2002 1:13:48 PM"
ExceptionManager.FullName="Microsoft.ApplicationBl ocks.ExceptionManagement,
Version=1.0.1769.18782, Culture=neutral, PublicKeyToken=null"
ExceptionManager.AppDomainName="ExceptionManagemen tQuickStartSamples.exe"
ExceptionManager.ThreadIdentity=""
ExceptionManager.WindowsIdentity="jean claude van damme" />
<Exception ExceptionType="System.DivideByZeroException"
Message="Attempted to divide by zero."
TargetSite="Void btnLogon_Click(System.Object, System.EventArgs)"
Source="ExceptionManagementQuickStartSamples">
<StackTrace> at
ExceptionManagementQuickStartSamples.Form1.btnLogon_Click(Object sender,
EventArgs e) in c:\program files\microsoft application blocks for
.net\exception
management\code\cs\exceptionmanagementquickstartsamples\form1.cs:line
171</StackTrace>
</Exception>
</ExceptionInformation>
*/
ErrorID int not null primary key ,
MachineName varchar2(128) null ,
TimeStampValue varchar2(64) null ,
FullName varchar2(128) null ,
AppDomainName varchar2(128) null ,
ThreadIdentity varchar2(128) null ,
WindowsIdentity varchar2(128) null ,
StackTrace varchar2(4000) null ,
ExceptionType varchar2(128) null ,
Message varchar2(640) not null ,
TargetSite varchar2(128) null ,
Source varchar2(128) null ,
EntryDateTime date default sysdate not null
)
/
DROP SEQUENCE SEQ_ErrorLog
/
CREATE SEQUENCE SEQ_ErrorLog
start with 1
increment by 1
nomaxvalue
/
DROP TABLE XMLTempHolderTable
/
CREATE TABLE XMLTempHolderTable (
/*
There is an issue reading a clob as an XMLTYPE directly
This is a temporary workaround, this table should never have more than 1
record in it
and is just a working table.
*/
XMLID int not null primary key ,
XMLValue XMLTYPE --CLOB
)
/
COMMIT
/
|||"sloan" <sloan@.ipass.net> a crit dans le message de
news:eadkgEeuHHA.3368@.TK2MSFTNGP02.phx.gbl...
> Oracle 10? No idea.
>
> Here is what I found.
>
> I now remember how much I hated Oracle XML.
> Good luck. I have no more I can offer beyond this.
Ok thanks


handling nulls with inserts

Hi,

I've got a program that has a form. On the form there are several fields which are optional that I would prefer to remain as NULL in the database (rather than a default value) if the user doesn't fill them out. I'm using stored procedures for my inserts (SQL 2000 + C#).

How do most people handle these situations? Right now I have a separate function which receives the parameter values as params and then handles creating the parameters and executing the stored procedure. I'm not sure how to handle passing that I want one of those to be null, at least not without having multiple functions and possibly various stored procedures for different possibilities.

Thanks.

I would recommend not using stored procedures if this is what you want to accomplish. Like you said, you would have to create multiple stored procedures for each variation. Instead, create the insert in your code. I use a StringBuilder to build an SQL statement. Within the logic of your system you can put if-then statements to determine what the user inputted and therefore how to build the SQL and the correlating params.

Have you tried to pass the value DBNull.Value to the stored procedure? Not sure that this would work, but it might be worth a try if you want to keep your stored procedures.

Hope that helps.
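For what it's worth, a single stored procedure whose optional parameters default to NULL can also cover all the combinations: passing DBNull.Value from C#, or simply omitting the parameter, leaves the column NULL. A minimal sketch with hypothetical table and column names:

```sql
CREATE PROCEDURE dbo.InsertPerson
    @Name  nvarchar(50),
    @Phone nvarchar(20)  = NULL,  -- optional; stays NULL if not supplied
    @Email nvarchar(100) = NULL   -- optional; stays NULL if not supplied
AS
INSERT INTO Person (Name, Phone, Email)
VALUES (@Name, @Phone, @Email)
```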

Handling nulls in matrix sub totals

I have a matrix where I am using the inscope function to return different
calculations depending on which group it is in (ie the detail is just the
value and the sub totals are sum(values)).
In for the subtotal group I have used:
iif(isnothing(sum(Fields!CurrentCount.Value)), 0,
sum(Fields!CurrentCount.Value)
But this is still returning blanks where there are no values in the group to
sum together.
Can anyone suggest how to get around this issue?
TIA
Andrewyou might try iif(sum(Fields!CurrentCount.Value = nothing)
"Andrew Murphy" <AndrewMurphy@.discussions.microsoft.com> wrote in message
news:56B0AB62-6CDE-4F41-BF39-AC72FABAC2C0@.microsoft.com...
>I have a matrix where I am using the inscope function to return different
> calculations depending on which group it is in (ie the detail is just the
> value and the sub totals are sum(values)).
> In for the subtotal group I have used:
> iif(isnothing(sum(Fields!CurrentCount.Value)), 0,
> sum(Fields!CurrentCount.Value)
> But this is still returning blanks where there are no values in the group
> to
> sum together.
> Can anyone suggest how to get around this issue as it ?
> TIA
> Andrew
also, if the formula you put on here looks just like the one you have in the
expression, then you need to add a parenthesis onto the end. I tried to
recreate your problem and I get a 0. Weird
"Andrew Murphy" <AndrewMurphy@.discussions.microsoft.com> wrote in message
news:56B0AB62-6CDE-4F41-BF39-AC72FABAC2C0@.microsoft.com...
>I have a matrix where I am using the inscope function to return different
> calculations depending on which group it is in (ie the detail is just the
> value and the sub totals are sum(values)).
> In for the subtotal group I have used:
> iif(isnothing(sum(Fields!CurrentCount.Value)), 0,
> sum(Fields!CurrentCount.Value)
> But this is still returning blanks where there are no values in the group
> to
> sum together.
> Can anyone suggest how to get around this issue as it ?
> TIA
> Andrew
and also, maybe you should try to set the null value to 0 before you sum it.
"Andrew Murphy" <AndrewMurphy@.discussions.microsoft.com> wrote in message
news:56B0AB62-6CDE-4F41-BF39-AC72FABAC2C0@.microsoft.com...
>I have a matrix where I am using the inscope function to return different
> calculations depending on which group it is in (ie the detail is just the
> value and the sub totals are sum(values)).
> In for the subtotal group I have used:
> iif(isnothing(sum(Fields!CurrentCount.Value)), 0,
> sum(Fields!CurrentCount.Value)
> But this is still returning blanks where there are no values in the group
> to
> sum together.
> Can anyone suggest how to get around this issue as it ?
> TIA
> Andrew
>

Handling nulls in calculated report items

Hello,
How do I block null values from displaying "#Error" when I format a
datetime.
dbuchanan|||Would an IIF statement not work in the expression?
for example:
=IIF (Fields!Date.Value = "NULL", "",Fields!Date.Value)
This would bring up a blank space (or whatever you want the output to be)
if the value is NULL; otherwise it just brings out the value.
I'm fairly new to all of this, so it's just a thought.
"dbuchanan" wrote:
> Hello,
> How do I block null values from displaying "#Error" when I format a
> datetime.
> dbuchanan
Theoretically yes, that would work, but in practice it doesn't, as you have to
write
=iif(Fields!Date.Value is system.dbnull.Value, "", Fields!Date.Value)
otherwise deal with it in the initial query.
Greg
"Robert Seal" <RobertSeal@.discussions.microsoft.com> wrote in message
news:AB5060A6-88AE-4FB4-A224-C1C548418EFB@.microsoft.com...
> Would an IIF statement not work in the expression
> for example:
> =IIF (Fields!Date.Value = "NULL", "",Fields!Date.Value)
> This would bring up a blank space (or whatever you want it the output to
> be)
> if the value is NULL, otherwise it just brings out the value.
> Im fairly new to all of this so its just a thought.
> "dbuchanan" wrote:
>> Hello,
>> How do I block null values from displaying "#Error" when I format a
>> datetime.
>> dbuchanan
>>

Handling NULLS - help!

I would like to display some text in a textbox conditional on the null/not
null value of a database field (called "ID").
I have tried various versions of the following:
=iif(len(trim(Fields!ID.Value))<1 OR (Fields!ID.Value IS
system.dbnull.value),"ID is null","ID is not null")
but I still get this error:
"The query returned no rows for the data set. The expression therefore
evaluates to null."
any suggestions?Try =IIF(IsNothing(Fields!ID.Value), ...|||Thanks Rose - that worked perfectly :)

Handling null in a result

-
--if the following is null, then do not print
substring (m.gldebitacct, 1, 6)+ '.' + w.wo2 + '.' + w.wolo5 + '.' + substring (m.gldebitacct, 8, 7) + '.' +
w.wonum as PROJ,
-

How do I ignore the row that contains a null and move on to the next record?

Thanks,

dave|||You can use IS NULL in the predicate to compare a column to NULL. And there is an ISNULL() expression that you can use to return something other than NULL, for example in a SELECT clause...
Best regards
Michael|||AND this to the WHERE clause:

... AND substring (m.gldebitacct, 1, 6)+ '.' + w.wo2 + '.' + w.wolo5 + '.' + substring (m.gldebitacct, 8, 7) + '.' +
w.wonum IS NOT NULL

I'd also remove the SUBSTRING function.

Alternatively, you could do this:

... AND (m.gldebitacct IS NOT NULL AND w.wo2 IS NOT NULL AND w.wolo5 IS NOT NULL AND w.wonum IS NOT NULL)

Any member of your concatenated string that has a NULL value will cause the entire concatenated string to be NULL.
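Since any NULL member nulls the whole concatenation, a third option is to substitute empty strings member by member; a sketch against the same columns (the FROM/JOIN clause was not shown in the post, so it is elided here):

```sql
-- Sketch: COALESCE each member to '' so one NULL column does not
-- null out the whole PROJ string.
SELECT COALESCE(m.gldebitacct, '') + '.' +
       COALESCE(w.wo2, '') + '.' +
       COALESCE(w.wolo5, '') + '.' +
       COALESCE(w.wonum, '') AS PROJ
FROM ...  -- same FROM/JOIN as the original query
```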

Handling Null fields with three-tier architecture

I'm using Visual Web Designer Express (with Visual Basic), with a SQL Server 2000 database. I have a prototype application running satisfactorily using code that builds queries, but I'm now trying to rebuild the application "properly" using three-tier architecture. I have been following the principles of Scott Mitchell's tutorials. I have created a database .xsd with a table adapter, and replaced the rather grotty query-building code in the business layer with better code referencing the table adapter. Thus where the first version had code: -

Dim queryString As String = "SELECT * FROM NZGDB_User WHERE USRid = '" & Userid & "'"
Dim message As String = ""
Dim Found As Boolean = False
Try
    Using connection As New SqlConnection(GDB_AppSettings.Connection)
        Dim command As New SqlCommand(queryString, connection)
        connection.Open()

        Dim reader As SqlDataReader = command.ExecuteReader()

        If reader.Read() Then
            Found = True
            _USRid = reader("USRid")
            _USRPassword = reader("USRPassword")
            _USREmail = reader("USREmail")
            _USRTitle = reader("USRTitle")
            _USRName = reader("USRName")
            _USRRole = reader("USRRole")
            If IsDBNull(reader("USRAgreedConditions")) = False Then
                _USRAgreedConditions = reader("USRAgreedConditions")
            End If
        End If
        reader.Close()
    End Using
Catch ex As Exception
    If Left(Err.Description, 68) = "An error has occurred while establishing a connection to the server." Then
        Return "Cannot open database to logon"
    Else
        Return Err.Description
    End If
End Try

the new version is much more elegant: -

Dim taUser As New NZGDBTableAdapters.NZGDB_UserTableAdapter()

Dim tbUser As NZGDB.NZGDB_UserDataTable = taUser.GetUserByUserid(userid)
If tbUser.Count <> 1 Then ' Anything other than 0 or 1 should be impossible
    Return "User not found"
End If

Dim trUser As NZGDB.NZGDB_UserRow = tbUser(0)
_USRid = userid
_USRPassword = password
_USREmail = trUser.USREmail
_USRTitle = trUser.USRTitle
_USRName = trUser.USRName
_USRRole = trUser.USRRole
_USRAgreedConditions = trUser.USRAgreedConditions

However, there is a problem. The database field USRAgreedConditions is a Datetime field that can be null. The new version works perfectly when it is NOT null, but throws an exception: -

System.Data.StrongTypingException was unhandled by user code
Message="The value for column 'USRAgreedConditions' in table 'NZGDB_User' is DBNull."
Source="App_Code.wt2vzoc1"
...

There is no point in writing: -
If Not IsDBNull(trUser.USRAgreedConditions) Then
_USRAgreedConditions = trUser.USRAgreedConditions
End If
because the exception occurs within the automatically-created code in the data access layer. I tried changing the Nullvalue property of the field USRAgreedConditions in the table adaptor, but the only valid option is (Throw Exception) unless the field is a String. Of course USRAgreedConditions is a Datetime field, so I can't change the property.

It seems that my only options are: -
1. To stick with the old query-building code. But this doesn't give me the advantages of a proper 3-tier architecture
2. To change the generated code in wt2vzoc. This MUST be a bad idea - surely I should leave this code untouched. Besides, what if the table adaptor has to be regenerated when I change the table design?
3. Code a Try block within the business layer: -
Try
    _USRAgreedConditions = trUser.USRAgreedConditions
Catch ex As Exception
    _USRAgreedConditions = Nothing
End Try

This seems to work OK, but seems less elegant than the original code in the old version: -
If IsDBNull(reader("USRAgreedConditions")) = False Then
_USRAgreedConditions = reader("USRAgreedConditions")
End If

Is there a better way?

Hi Robert my friend,

There is a better way. Change your query to list all of the fields within the table that you want and set the USRAgreedConditions field to a specific valid date if it is NULL via the ISNULL() function. The SQL below will return today's date in this field where it is NULL but you can change it to use any valid date value that you wish.

SELECT USRid, USRPassword, USREmail, USRTitle, USRName, USRRole,
       ISNULL(USRAgreedConditions, GETDATE()) AS USRAgreedConditions
FROM NZGDB_User

For this reason, it is not good practice to rely solely on date fields for business logic. If this field being null indicates that the conditions have not been agreed yet, it would be better to have an additional BIT field that indicates Yes or No and then the USRAgreedConditions field would simply be a field containing the date information for those that are set to 1 (Yes).
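A sketch of that extra BIT field (the column name USRHasAgreed is hypothetical):

```sql
-- Sketch: an explicit Yes/No flag, so business logic does not have to
-- infer agreement from a nullable date (column name is hypothetical).
ALTER TABLE NZGDB_User
    ADD USRHasAgreed BIT NOT NULL DEFAULT 0;

-- Backfill the flag from the existing date column:
UPDATE NZGDB_User
SET USRHasAgreed = 1
WHERE USRAgreedConditions IS NOT NULL;
```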

Kind regards

Scotty

|||

Thanks Scotty, that's certainly a neater solution, and I'll progressively use it. Unfortunately my code is now riddled with Try/End Try blocks to handle null fields, so it will take some time to get rid of them all.

Your solution will certainly work in most, perhaps all, of my situations. However I wonder if there isn't a lurking problem. The point of NULL is to distinguish "No value" from "Default value". Sometimes you need to make the distinction in your code, and there is a risk that whatever value you choose for VALUE in
ISNULL(USRAgreedConditions, VALUE) AS USRAgreedConditions
will coincide with a real value. Getdate() won't do, and neither will 0 (which becomes 1/1/1900) as either could be a valid date in this application. In this case I can use Getdate()+20 and test for a future date (which is otherwise logically impossible), but I have to be careful to remember to check that my logic doesn't write these artificial values back to the database. I tried using
Dim USRAgreedConditions As Nullable(Of DateTime)
but found myself getting into a lot of trouble with errors popping up elsewhere.

It's a pity that the designers of the new NULLABLE feature didn't make the handling of all potentially-null fields consistent. Thus I find that there is little trouble with String fields (CHAR, VARCHAR, etc in the database), as I can simply set the column's Nullvalue property to Empty. It's just silly that I can't do this for GUID and Date (and any other type) fields. There are also some idiocies in the way that operator usage changes with NULLABLE fields. Enough raving!

Thanks for your help.

Regards, Robert

|||

Hi Robert,

You can also try this within the query. In addition to generating a valid date for your column, have another column that indicates whether or not it was generated, so that you can check this extra column in the code to determine how to process the date field: -

SELECT ISNULL(USRAgreedConditions, GetDate()) AS USRAgreedConditions,
CASE WHEN USRAgreedConditions IS NULL THEN 'Yes' ELSE '' END AS IsNullDate
FROM TableName

Kind regards

Scotty

|||

Thanks Scotty, yes that's exactly what I want. Something like this (returning the default value of Nothing and an indicator of whether it's a real null or not) is what Microsoft should be doing automatically; in the meantime I guess this is the best we can do.

Thanks again, Robert

Handling merge replication foreign key conflicts

I have SQL CE clients replicating against a SQL Server 2005 db using merge replication. The DB has a table A and a table B, which has a foreign key to table A. It is common in my application for records in table A to be deleted on the server. I'm running into issues when a table A record has been deleted, but table B records were created on the clients which point to that record. When I sync I get a conflict because the table B records cannot be applied at the server, and the table A delete cannot be applied at the client.

What I would like to happen is to have the table B records on the client be deleted by the merge process, and to create a log of the event. I've looked into creating a business logic handler to do this, but I'm not sure what type of conflict this is (UpdateDeleteConflict or otherwise), and I'm not sure that deleting the table B records is something I can do in the business logic handler.

This seems like it would be a common problem in merge replication. I'm not locked into using a custom business logic handler at all. Any suggestions are welcome.

Thanks.
Any ideas on this?|||

presumably you have a CASCADE DELETE to trash the child ("table B") whenever you delete the parent ("table A") rows

- whether DRI or by trigger

if you were using transactional replication the delete child-then-parent would be part of same atomic transaction

- thus maybe you would get both or neither.

possibly using transactional repl with immediate updating would avoid updates to such ghosts.

however using merge repl [yes I recognise its benefits!] will inevitably have to cope with unconstrained race conditions across the fabric. therefore you would have to cope with this situation

seems that you could

1. write custom resolvers, or

2. delve into the guts of the sprocs that actually execute the I/U/D statements [or use INSTEAD OF triggers]

3. simply add extra IsDeleted flag on each tableA,tableB table for soft-delete [actually do hard-delete at quiet time]

HTH

Dick
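Option 3 (the soft-delete flag) might be sketched as follows; the table and key names are hypothetical:

```sql
-- Sketch: soft-delete flags so a parent "delete" replicates as an
-- update and never orphans child rows (names hypothetical).
ALTER TABLE TableA ADD IsDeleted BIT NOT NULL DEFAULT 0;
ALTER TABLE TableB ADD IsDeleted BIT NOT NULL DEFAULT 0;

-- "Delete" a parent by flagging it and its children:
UPDATE TableB SET IsDeleted = 1 WHERE AId = @Id;
UPDATE TableA SET IsDeleted = 1 WHERE AId = @Id;

-- Later, at a quiet time, hard-delete flagged rows (children first):
DELETE FROM TableB WHERE IsDeleted = 1;
DELETE FROM TableA WHERE IsDeleted = 1;
```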

Handling large data values in Textboxes

Hi, I've got a report using a List item that vertically displays the columns from a table. The problem I run into is that some of the fields in this table contain large blocks of text where the users have entered comments and such.

I am using Textboxes to display this data.

So my report will look something like
--
Field label 1 Field value 1
Field label 2 Field value 2
Field label 3

<white space>

<page break>

Field value 3 > this is a big block of text
Field label 4 Field value 4
etc

It appears as though the report attempts to keep the contents of each textbox together, even if that means breaking onto an entirely new page to do this. I would prefer the data to flow more naturally, with the page breaking in the middle of the data being displayed should it be too large to fit on the page it started on.

--
Field label 1 Field value 1
Field label 2 Field value 2
Field label 3 Field value 3 As much as can fit on this page

<page break>

Field value 3 > remaining data that broke over the page
Field label 4 Field value 4
etc

Any suggestions would be appreciated.

Try setting the KeepTogether property of your List to true so that both the label and the value of field 3 may go to the next page.

Shyam

|||

Thanks for the reply Shyam

While it would be nice to keep the label and value together, my major problem is that I want the report to display whatever it can fit of my large comment blocks on each page, instead of breaking the entire comment onto a second page and leaving me with a huge amount of white space (see my example above for my desired output).

|||

I guess your requirement may not be possible, though you can resort to a workaround like the one I suggested above, or even the RepeatWith property of the label textbox, set to the value field's name.

Shyam

Handling large amounts of data for reporting?

Hi all,
I have an application which logs a considerable amount of data. Each day,
we log about 50,000 to 100,000 rows of data.
We like to report on this data... currently I'm using a stored procedure to
calculate the statistics; however, since these queries are ad hoc, reports take a
while to generate.
So how do you guys handle large amounts of data? Is there a good way to
precalculate a set of statistics to handle ad hoc queries (i.e. By Hour,
By Day, By Week, By Month)? Our application also provides near-realtime
statistics... so precalculation has to be done on a continual basis. Does
.NET have any statistics classes that might help out with this sort of
thing? I don't think .NET performance counters will work since they don't
log persistent data.
Any ideas?
Thanks!
Lucas Tam (REMOVEnntp@.rogers.com)
Please delete "REMOVE" from the e-mail address when replying.
[url]http://members.ebay.com/aboutme/spot18/[/url]|||> So how do you guys handle large amounts of data?
Usually, you can run queries in the background that will aggregate to the
various dimensions you want to measure (e.g. ).
In one current system, we aggregate data by hour, then when someone wants to
do a "this week" or "this month" type of query, we simply union the
aggregated data (super fast) from before the last hour where stats were
rolled up, against the raw data (which is also relatively fast, using the
same kind of query that would "roll it up") and then apply group by to the
derived table. This way the majority of the crunching is already done, and
you are applying operations on raw data to only the last hour or less.
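A sketch of that roll-up-plus-raw-tail union (the table and column names are hypothetical):

```sql
-- Sketch: pre-aggregated hourly stats unioned with the raw tail that
-- has not been rolled up yet, then grouped again (names hypothetical).
SELECT HourStart, SUM(EventCount) AS EventCount
FROM (
    SELECT HourStart, EventCount              -- already rolled up
    FROM HourlyStats
    WHERE HourStart < @LastRollup
    UNION ALL
    SELECT DATEADD(hour, DATEDIFF(hour, 0, LoggedAt), 0) AS HourStart,
           1 AS EventCount                    -- raw rows since the roll-up
    FROM RawLog
    WHERE LoggedAt >= @LastRollup
) AS combined
GROUP BY HourStart;
```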
You might also look at Analysis Services or other BI products, but I think
the lag of rolling data into cubes/dimensions will fail your realtime
requirement.|||Do you need to report against a big table that grows 100K rows daily or
do you need to report against a table that has 100K rows?
100,000 rows of data is usually NOT a large amount on decent hardware.|||"AK" <AK_TIREDOFSPAM@.hotmail.COM> wrote in news:1125092267.248510.101290
@.g49g2000cwa.googlegroups.com:

> Do you need to report against a big table that grows 100K rows daily or
> do you need to report against a table that has 100K rows?
> 100,000 rows of data is usually NOT a large amount on decent hardware.
I need to report against a table that grows by ~100K row daily.
Lucas Tam (REMOVEnntp@.rogers.com)
Please delete "REMOVE" from the e-mail address when replying.
[url]http://members.ebay.com/aboutme/spot18/[/url]|||You may also want to explore partitioned views depending on the types
of queries you're looking at; we partition our data daily (about 800k
rows a day), and the optimizer knows where to look for the data we want
to retrieve.
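Daily partitioning with a partitioned view might be sketched like this (table and column names are hypothetical):

```sql
-- Sketch: one member table per day; the CHECK constraint on the
-- partitioning column lets the optimizer prune partitions.
CREATE TABLE Log_20050825 (
    LogId  INT     NOT NULL,
    LogDay CHAR(8) NOT NULL CHECK (LogDay = '20050825'),
    PRIMARY KEY (LogDay, LogId)  -- partitioning column must be in the key
);
CREATE TABLE Log_20050826 (
    LogId  INT     NOT NULL,
    LogDay CHAR(8) NOT NULL CHECK (LogDay = '20050826'),
    PRIMARY KEY (LogDay, LogId)
);
GO
CREATE VIEW LogAll AS
    SELECT LogId, LogDay FROM Log_20050825
    UNION ALL
    SELECT LogId, LogDay FROM Log_20050826;
```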
Stu|||Summarized datasets, federated views, and OLAP cubes are some of the basic
components, but the general solution is called data warehousing. Basically,
the concept is to export a pre-calculated and pre-summarized subset of data
from the OLTP system to another database or server for the purpose of reporting
or analysis. The goal is to design a system that is optimized to suit your
own specific analytic needs, so the appropriate implementation details can
vary from one warehouse to the next.
This document provides a good overview:
http://userfs.cec.wustl.edu/~cse530...ng-Combined.ppt
"Lucas Tam" <REMOVEnntp@.rogers.com> wrote in message
news:Xns96BEAE3D2946Bnntprogerscom@.127.0.0.1...
> Hi all,
>
> I have an application which logs a considerable amount of data. Each day,
> we log about 50,000 to 100,000 rows of data.
> We like to report on this data... currently I'm using a stored procedure
> to
> calculate the statistics, however since this is an ad hoc, reports take a
> while to generate.
> So how do you guys handle large amounts of data? Is there a good way to
> precalculate a a set of statistics to handle ad hoc queries (i.e. By Hour,
> By Day, By W, By Month). Our application also provides near realtime
> statistics... so precalculation has to be done on a continual basis. Does
> .NET have any statistics classes that might help out with this sort of
> thing? I don't think .NET performance counters will work since they don't
> log persistent data.
> Any ideas?
> Thanks!
> --
> Lucas Tam (REMOVEnntp@.rogers.com)
> Please delete "REMOVE" from the e-mail address when replying.
> [url]http://members.ebay.com/aboutme/spot18/[/url]|||"JT" <someone@.microsoft.com> wrote in
news:Oq9egaKrFHA.3452@.TK2MSFTNGP14.phx.gbl:

> The goal is to design a system that is optimized to suit your
> own specific analytic needs, so the appropriate implementation details
> can very from one warehouse to the next.
Hmmm... that's the problem.
A lot of the data our application generates is being used by external
clients, and they all have differing reporting requirements.
Not to mention, management seems to change the way queries are calculated
every so often.
Any ideas?
Would the Star/Snowflake structures be my best solution?
Thanks!
Lucas Tam (REMOVEnntp@.rogers.com)
Please delete "REMOVE" from the e-mail address when replying.
Newmarket Volvo Sucks! http://newmarketvolvo.tripod.com

Handling Languages in Reports

Hello,
I have a French and English user base and I would like to display reports in
the language of preference of the user. However, I am not certain as to a
best practice for this within Visual Studio. In addition, how can I make the
report server display a French GUI?
Any help would be appreciated,
--
Ryan|||Hi Ryan,
My understanding of this issue is: You want to show different layout based
on different language in reporting services. If I misunderstood your
concern, please feel free to let me know.
You can use the locale settings on a client computer to determine how a
report appears to the user. For example, you can create a report that uses
a different query expression based on the locale information returned by
the User.Language global variable. The query may change to retrieve
localized information from a different column depending on the language
returned. You can also use an expression in the language settings of the
report or report items based on this variable.
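For example, feeding User.Language into a report parameter, the query-side switch might be sketched as follows (the localized column names are hypothetical):

```sql
-- Sketch: choose a localized column based on an @Language parameter
-- populated from User.Language (column names are hypothetical).
SELECT ProductId,
       CASE WHEN @Language LIKE 'fr%' THEN DescriptionFR
            ELSE DescriptionEN
       END AS Description
FROM Products;
```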
While you can change the language settings of a report, you must be careful
about any display issues this may cause. For example, changing the locale
setting of the report can change the date format in the report, but it can
also change the currency format. Unless there is a conversion process in
place for the currency, this may cause the incorrect currency symbol to be
displayed in the report. To avoid this, set the language information on the
individual items that you want to change, or set the item with the currency
data to a specific language.
Here is a BOL article for your reference:
International Considerations for Reporting Services
http://msdn2.microsoft.com/en-us/library/ms156493(d=ide).aspx
Hope this will be helpful!
Sincerely,
Wei Lu
Microsoft Online Community Support
==================================================
When responding to posts, please "Reply to Group" via your newsreader so
that others may learn and benefit from your issue.
==================================================
This posting is provided "AS IS" with no warranties, and confers no rights.|||Thank you, Wei,
I will have a look at that and I may have a follow-up question for you at a
later date.
--
Ryan
"Wei Lu [MSFT]" wrote:
> Hi Ryan,
> My understanding of this issue is: You want to show different layout based
> on different language in reporting services. If I misunderstood your
> concern, please feel free to let me know.
> You can use the locale settings on a client computer to determine how a
> report appears to the user. For example, you can create a report that uses
> a different query expression based on the locale information returned by
> the User.Language global variable. The query may change to retrieve
> localized information from a different column depending on the language
> returned. You can also use an expression in the language settings of the
> report or report items based on this variable.
> While you can change the language settings of a report, you must be careful
> about any display issues this may cause. For example, changing the locale
> setting of the report can change the date format in the report, but it can
> also change the currency format. Unless there is a conversion process in
> place for the currency, this may cause the incorrect currency symbol to be
> displayed in the report. To avoid this, set the language information on the
> individual items that you want to change, or set the item with the currency
> data to a specific language.
> Here is a BOL article for your reference:
> International Considerations for Reporting Services
> http://msdn2.microsoft.com/en-us/library/ms156493(d=ide).aspx
> Hope this will be helpful!
> Sincerely,
> Wei Lu
> Microsoft Online Community Support
> ==================================================> When responding to posts, please "Reply to Group" via your newsreader so
> that others may learn and benefit from your issue.
> ==================================================> This posting is provided "AS IS" with no warranties, and confers no rights.
>|||Hi Ryan,
Thank you for your update.
I will close this issue now and if you have any questions about this issue,
please feel free to re-enter the newsgroup and post it under this thread. I
will reply as soon as possible.
Have a nice day!
Sincerely,
Wei Lu
Microsoft Online Community Support
==================================================
When responding to posts, please "Reply to Group" via your newsreader so
that others may learn and benefit from your issue.
==================================================
This posting is provided "AS IS" with no warranties, and confers no rights.