Showing posts with label handling. Show all posts

Monday, February 27, 2012

Handling world time with SQL and ASP.NET

Hi,

I would like to capture a date/time in my SQL database during a certain operation. This piece of data will be displayed on an ASP.NET page.

When displaying the date time, I want it converted to the local time zone equivalent as set in the user's operating system.

What setup do I need on the SQL and the coding in the ASP.NET page?

Thanks.|||The following MSDN article may help you to understand date and time handling and also avoid inherent problems when dealing with different time zones.

Best Practices Using DateTime in the .NET Framework

Hope this helps

Regards

Wayne Phipps|||Hi Wayne,

Thanks. That's a great article on issues related to saving and computing date time when daylight saving is relevant.

I'm also looking at saving a date/time in one time zone and then displaying it in the time zone of the target user's system. I managed to find a web site dedicated to addressing this issue, the SimpleTimeZone .NET Framework Class (http://www.michaelbrumm.com/simpletimezone.html). However, I have not yet evaluated the class concerned.

Thanks again for your reply.
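For reference, later versions of the .NET Framework (3.5 and up) include a built-in TimeZoneInfo class that covers this scenario, so the third-party class above is no longer the only option. A minimal sketch, assuming the database column stores UTC (e.g. populated with GETUTCDATE()) and using an example time-zone id:

```csharp
// Sketch only: assumes the SQL column stores UTC (e.g. written with GETUTCDATE()).
// TimeZoneInfo is available from .NET 3.5 onward; the zone id below is an example.
using System;

class UtcDisplayExample
{
    static void Main()
    {
        // Value as read from the database, marked explicitly as UTC.
        DateTime storedUtc = DateTime.SpecifyKind(
            new DateTime(2012, 2, 27, 14, 30, 0), DateTimeKind.Utc);

        // Convert to a specific user's zone (the id here is a hypothetical example).
        TimeZoneInfo userZone = TimeZoneInfo.FindSystemTimeZoneById("Singapore Standard Time");
        DateTime local = TimeZoneInfo.ConvertTimeFromUtc(storedUtc, userZone);

        Console.WriteLine(local);
    }
}
```

Note that a web server cannot see the user's operating-system time zone directly; the browser has to report its offset (for example via JavaScript's getTimezoneOffset) before the server can pick the right zone.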

Handling wildcard characters in query string

Hi

This is my first interaction with the forum.
My query is:

I have a User Management module in my application where I created a user with the name

`~!@.#$@.%^&*()[_]+|}{":?><-=\[[]];',./

Now I have functionality to search for an existing user. The user supplies a search string or a single character, and the search finds all the records containing that character.

How do I go about it? The stored procedure I created gives correct results except in the following cases:

1. Search for % - Gives all records
2. Search for _ - Gives all records
3. Search for [ - Gives NO records
4. Search for the whole string - Gives NO records

I handled a few issues:

1. replaced [ by [[]
2. replaced _ by [_]

So issues 2 & 3 are resolved.

I tried replacing % with [%], but that did not work.

Could someone please help?

Thanks in advance
Ashutosh|||You could search the string with the CHARINDEX function instead of using LIKE.

HTH|||You could search the string with CHARINDEX function instead of using LIKE.

HTH

Sorry, but I did not get your point as to how this will help me get through this.

Ashutosh|||What is your current where predicate?|||What is your current where predicate?

WHERE FName like '%`~!@.#$%^&*()[_]+|}{":?><-=\[[]];'',./%'

In the front end code I handle it as input params come with %...% qualifiers and ' is replaced by ''

On the sql side i replace _ by [_] and [ by [[]

Ashutosh|||Why use LIKE? Is there a reason you can't search for the literal value?

WHERE FName = '`~!@.#$%^&*()[_]+|}{":?><-=\[[]];'',./'
Otherwise, if you are searching for a string portion, use charindex
WHERE CHARINDEX('`~!@.#$%^&*()[_]+|}{":?><-=\[[]];'',./', FName) >0
HTH

EDIT - you would need to remove the code adding the extra square brackets as you are searching for literals now.|||Why use LIKE? Is there a reason you can't search for the literal value?

WHERE FName = '`~!@.#$%^&*()[_]+|}{":?><-=\[[]];'',./'
Otherwise, if you are searching for a string portion, use charindex
WHERE CHARINDEX('`~!@.#$%^&*()[_]+|}{":?><-=\[[]];'',./', FName) >0
HTH

EDIT - you would need to remove the code adding the extra square brackets as you are searching for literals now.

Well, I am pretty reluctant to change the SP and UDF for that.

If I could somehow get around it just with some sort of REPLACE statement etc., that would be nice :)

Ashutosh|||well i am pretty much reluctant to change the sp and UDF for that

if i could somehow get around it just by some sort of replace statement etc, that would be nice :)


Ashutosh|||There's another reason - sargability. LIKE '%something%' isn't an efficient search if you are looking for the exact string match rather than a portion. It appears that you are looking for one bodge to correct for another.|||There's another reason - sargability. LIKE '%something%' isn't an efficient search if you are looking for the exact string match rather than a portion. It appears that you are looking for one bodge to correct for another.

Probably I have gone the harder way.

I created a function to check for this. If there is an exact match, it passes the parameter as a plain string; otherwise it pads the % characters around it.

Ashutosh|||well i am pretty much reluctant to change the sp and UDF for that

if i could somehow get around it just by some sort of replace statement etc, that would be nice :)


Ashutosh|||If you aren't keen on CHARINDEX and you need to search for portions then perhaps:

WHERE REPLACE(REPLACE(FName, '%', 'Wild1'), '_', 'Wild2') LIKE REPLACE(REPLACE('`~!@.#$%^&*()[_]+|}{":?><-=\[[]];'',./', '%', 'Wild1'), '_', 'Wild2')
It ain't exactly pretty though.|||If you aren't keen on CHARINDEX and you need to search for portions then perhaps:

WHERE REPLACE(REPLACE(FName, '%', 'Wild1'), '_', 'Wild2') LIKE REPLACE(REPLACE('`~!@.#$%^&*()[_]+|}{":?><-=\[[]];'',./', '%', 'Wild1'), '_', 'Wild2')
It ain't exactly pretty though.

I did the same, but in that case it also replaces the leading and trailing % that I add from the code itself, and hence the criteria change.

%%% --> should come as %[%]%

but it comes as [%][%][%]

Ashutosh|||This is an SP, yes? So '%`~!@.#$%^&*()[_]+|}{":?><-=\[[]];'',./%' is actually passed as a parameter - like @myParam?
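For completeness, an alternative to bracket-escaping each wildcard individually is LIKE's ESCAPE clause, which designates a single escape character. A sketch, assuming the raw user input arrives in a hypothetical @Search parameter and the leading/trailing wildcards are added only after escaping:

```sql
-- Sketch: escape LIKE wildcards with a designated escape character,
-- then add the leading/trailing % wildcards afterwards.
-- @Search is a hypothetical parameter holding the raw user input.
DECLARE @Search NVARCHAR(100);
SET @Search = N'100%_[done]';

DECLARE @Escaped NVARCHAR(200);
SET @Escaped =
    REPLACE(REPLACE(REPLACE(REPLACE(@Search,
        '\', '\\'),   -- escape the escape character first
        '%', '\%'),
        '_', '\_'),
        '[', '\[');

SELECT FName
FROM   Users
WHERE  FName LIKE '%' + @Escaped + '%' ESCAPE '\';
```

Because the wildcards are appended after the escaping step, the user's own %, _ and [ characters are matched literally while the outer % still allow a substring search.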

Handling very large XML result sets

I am writing a .NET based application to create large XML data files using
SQLXML classes and FOR XML EXPLICIT queries. What are some strategies I can
use to break up and process these large result sets? The overhead of issuing
multiple queries by breaking them up via WHERE clause filters isn’t the way I
want to go since my queries are very large and take significant time to
process within SQL server.
I am currently experiencing out of memory exceptions on some larger result
sets (~50-60 Mbytes total XML file size). My first attempt was using
SqlXmlCommand.ExecuteXmlReader and an XmlDocument via this snippet of code:
XmlReader xr = forXMLCommand.ExecuteXmlReader();
XmlDocument xd = new XmlDocument();
xd.Load(xr);
This throws a System.OutOfMemoryException on the call to ExecuteXmlReader
when the result set gets very large.
I also tried using SqlXmlCommand.ExecuteStream thinking I could read a
buffer of chars at a time to process these large result sets but this also
resulted in a System.OutOfMemoryException on the call to ExecuteStream:
Stream s = forXMLCommand.ExecuteStream();
StreamWriter sw3 = new StreamWriter(mResultsFileName);
using (StreamReader sr = new StreamReader(s))
{
char[] c = null;
while (sr.Peek() >= 0)
{
c = new char[10000];
int numRead = sr.Read(c, 0, c.Length);
sw3.Write(c, 0, numRead);
}
}
I have tried running my application on two different systems, one with 1 GB of
main memory and the other a Win2K3 server with 8 GB of main memory. Both
systems seem to run out of memory at the same 50-60 MB limit. Are there any
.NET memory settings I can tweak to give my .NET application more memory?
Thanks for your suggestions and ideas,
Scott
The XmlReader is a streaming interface which should not run out of memory
via the SqlXmlCommand.ExecuteStream method.
Loading into an XmlDocument however will cache the entire document into
memory.
Can you remove the following two lines from your repro and see if you are
still having the problem:
XmlDocument xd = new XmlDocument();
xd.Load(xr);
Thanks -
Andrew Conrad
Microsoft Corp
http://blogs.msdn.com/aconrad
|||Andrew,
That was exactly my thought as well, but ExecuteStream is throwing an
OutOfMemoryException. I am NOT calling XmlDocument.Load in the code that
uses ExecuteStream.
Here is my full method I am using:
private void ExecuteSQLXMLCommandExecuteStream()
{
try
{
SqlXmlCommand forXMLCommand = new SqlXmlCommand("Provider=SQLOLEDB;DATA
SOURCE=Gibraltar;Initial Catalog=RDCModel;User ID=sa;Password=XXXX");
forXMLCommand.CommandType = SqlXmlCommandType.Sql;
StreamReader sr1 = new StreamReader(mQueryFileName);
string query = sr1.ReadToEnd();
sr1.Close();
query = query.Replace("\r\n", " ");
query = query.Replace("\t", " ");
forXMLCommand.CommandText = query;
Stream s = forXMLCommand.ExecuteStream();
StreamWriter sw3 = new StreamWriter(mResultsFileName);
using (StreamReader sr = new StreamReader(s))
{
char[] c = null;
while (sr.Peek() >= 0)
{
c = new char[10000];
int numRead = sr.Read(c, 0, c.Length);
sw3.Write(c, 0, numRead);
}
}
sw3.Close();
}
catch (SqlXmlException ex)
{
ex.ErrorStream.Position = 0;
string sqlErrorString;
sqlErrorString = new StreamReader(ex.ErrorStream).ReadToEnd();
Console.WriteLine(sqlErrorString);
RDCUtilities.WriteToLog(sqlErrorString);
}
catch (Exception ex)
{
Console.WriteLine(ex.Message);
Console.WriteLine(ex.StackTrace);
RDCUtilities.WriteToLog(ex.Message);
}
""Andrew Conrad"" wrote:

> The XmlReader is a streaming interface which should not run out of memory
> via the SqlXmlCommand.ExecuteStream method.
> Loading into an XmlDocument however will cache the entire document into
> memory.
> Can you remove the following two lines from your repro and see if you are
> still having the problem:
> XmlDocument xd = new XmlDocument();
> xd.Load(xr);
> Thanks -
> Andrew Conrad
> Microsoft Corp
> http://blogs.msdn.com/aconrad
>
|||Try using SqlXmlCommand.ExecuteToStream() instead of ExecuteStream.
Because of some technical limitations with COM interop, ExecuteStream
caches results.
Andrew Conrad
Microsoft Corp
http://blogs.msdn.com/aconrad
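Andrew's suggestion can be sketched as follows: ExecuteToStream writes the XML into a stream the caller supplies, so the result is never buffered in memory. Connection string and file name are placeholders.

```csharp
// Sketch of the ExecuteToStream approach: results are written directly to a
// FileStream, avoiding the internal caching that ExecuteStream performs.
// The connection string and file name are placeholders.
using System.IO;
using Microsoft.Data.SqlXml;

class StreamingExport
{
    static void Export(string query, string resultsFileName)
    {
        SqlXmlCommand cmd = new SqlXmlCommand(
            "Provider=SQLOLEDB;Data Source=(local);Initial Catalog=MyDb;Integrated Security=SSPI");
        cmd.CommandType = SqlXmlCommandType.Sql;
        cmd.CommandText = query;

        // The command streams its XML output into the stream we supply.
        using (FileStream fs = new FileStream(resultsFileName, FileMode.Create))
        {
            cmd.ExecuteToStream(fs);
        }
    }
}
```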

Handling variables

I have a script task that has a ReadWriteVariable = FilePath. I am doing the following below to calculate the FilePath variable. Instead of the output being C:\Rapagator\Downloaded\RETS_Search_ResidentialProperty_20060403094343.txt, it just shows RETS_Search_ResidentialProperty_20060403094343.txt. Any ideas how to resolve this problem?

Imports System
Imports System.Data
Imports System.Math
Imports Microsoft.SqlServer.Dts.Pipeline.Wrapper
Imports Microsoft.SqlServer.Dts.Runtime.Wrapper

Public Class ScriptMain
Inherits UserComponent
Public GetFilePath As String = ""

Public Overrides Sub Input0_ProcessInputRow(ByVal Row As Input0Buffer)
GetFilePath = "C:\Rapagator\Downloaded" + Row.FileName
End Sub
Public Overrides Sub PostExecute()
Variables.FilePath = GetFilePath
MyBase.PostExecute()
End Sub

End Class

How do you know the variable value is only the filename part you state? You cannot debug inside the Script Component, so perhaps add a MessageBox or log an event to write out the values you have just to be sure. Perhaps set a breakpoint as well and add the variable to the watch window. You do know that the Variables window only shows design-time values during runtime, even when completed or stopped on a breakpoint. Use the watch window to get real values. This does seem rather strange, are you only expecting one row to flow through this component; otherwise you will only store the last row's value in the variable. Is that what you want?


Does the filename start with a "\"? If not, the path looks invalid - C:\Rapagator\DownloadedMYFILE. Top tip: use System.IO.Path.Combine(x, y), as that will sort out path separators for you.
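A minimal sketch of the Path.Combine suggestion applied to the script above:

```vb
' Sketch: Path.Combine inserts the missing "\" between the folder and the
' file name, so the concatenation bug above cannot occur.
Public Overrides Sub Input0_ProcessInputRow(ByVal Row As Input0Buffer)
    GetFilePath = System.IO.Path.Combine("C:\Rapagator\Downloaded", Row.FileName)
End Sub
```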

Handling UNC path names

What is the best handling of a UNC Pathname?

What is the max length of a path name?
Which is the best type of field in SQL Server to use for a path name?

Thanks

I would store this in a VARCHAR field of an appropriate length.

HTH, Jens Suessmeyer,

|||

What is the max length of the path & filename? It used to be 255 chars but I believe it has grown since then.

|||

The maximum length of a UNC path is 260 Unicode characters (MAX_PATH, defined in windows.h, I believe). Search Google for MAX_PATH to see lots of people discussing this issue.

So a NVARCHAR(260) should be sufficient unless you allow the \\?\ prefix that bypasses normal MAX_PATH length. If you only have one locale you use for paths, then you can potentially use VARCHAR(260).
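A minimal T-SQL sketch of such a column (table and column names are illustrative):

```sql
-- Sketch: NVARCHAR(260) mirrors the classic MAX_PATH limit discussed above.
-- Table and column names are illustrative.
CREATE TABLE dbo.FileLocations
(
    FileId  INT IDENTITY(1,1) PRIMARY KEY,
    UncPath NVARCHAR(260) NOT NULL
);
```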

Handling Transaction

hi friends,

I would like to put a set of SQL statements under a transaction and have SQL Server take care of committing or rolling back the entire set depending on the success or failure of the statements in the set. I want the whole set either to succeed or to fail.

When I go through the docs, i find that SQL Server 2000 operates three transaction modes:
Autocommit transactions : Each individual statement is a transaction.

Explicit transactions : Each transaction is explicitly started with the BEGIN TRANSACTION statement and explicitly ended with a COMMIT or ROLLBACK statement.

Implicit transactions: A new transaction is implicitly started when the prior transaction completes, but each transaction is explicitly completed with a COMMIT or ROLLBACK statement.

As one can see, it seems that it is not possible to define "atomic compound SQL statements". Even if I use explicit transactions, it is not possible to achieve this, since I couldn't find a mechanism to handle errors for a group of statements.

I wonder how to write an atomic compound SQL statement in SQL Server.
Can anybody please help me with this?

Jake|||The only solution that I am aware of is enclosing the inserts/updates in one transaction and checking each for success. If one fails, roll back; if all succeed, commit.

Thanks for the suggestion. Instead of checking for success and failure on each line, I found another way of doing this.
If we put the set of statements in a BEGIN TRANSACTION ... COMMIT TRANSACTION block, the COMMIT TRANSACTION will only execute if all the statements succeed. Otherwise all the statements get rolled back. This solves my problem.

But still there is no way to handle exception for a block of statements.
Does Yukon release have support for this? any idea?

Jake

Originally posted by Kaiowas
The only solution that I am aware of is enclosing the inserts/updates into one transaction and check each for success. If one fails; rollback, if all succeed; commit.
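One caveat worth adding: by default, a runtime error inside a BEGIN TRANSACTION ... COMMIT block does not always roll everything back; SET XACT_ABORT ON is what reliably enforces the all-or-nothing behaviour described above. A minimal T-SQL sketch (table names are illustrative):

```sql
-- Sketch: SET XACT_ABORT ON makes any runtime error abort and roll back
-- the whole transaction, giving the all-or-nothing behaviour described above.
-- Table names are illustrative.
SET XACT_ABORT ON;

BEGIN TRANSACTION;
    INSERT INTO dbo.Orders (OrderId, CustomerId) VALUES (1, 42);
    UPDATE dbo.Stock SET Quantity = Quantity - 1 WHERE ProductId = 7;
COMMIT TRANSACTION;
```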

Handling SQLServerCE DataBase By windows application ?

Hi

i have the following problem :

i attempted to connect with a SQLServerCE DataBase

to insert and update its rows, but I noticed that I need the reference:

System.Data.SqlServerCe

I went to Add References but I didn't find it.

What should I do to get past this problem?

Please help me!

Hi Imad

I have moved your thread to the Devices team, who should be able to help you.

mairead

PM, TS Data

|||

Hello and sorry for the delayed reply.

Just a question first - are you attempting to open a SqlServerCe database on your desktop PC or on a mobile device (like a Pocket PC / Smartphone)? This is important because we want to add the correct DLL to your project.

If you're opening the database on the desktop, then we can find the System.Data.SqlServerCe.DLL in the directory where DEVENV.EXE is. That would be something like C:\Program Files\Microsoft Visual Studio 8\Common7\IDE

If you're opening the database on a device, then we can find the System.Data.SqlServerCe.DLL in the Mobile SDK folder. That would be something like C:\Program Files\Microsoft Visual Studio 8\SmartDevices\SDK\SQL Server\Mobile\v3.0

If either of the DLLs are missing then it means your SQLServerCe SDK is not installed on your machine. The easiest way to fix this is to REPAIR the visual studio installation. This will re-install the SDK and ensure the DLLs are present on the machine.

Please let me know how it works out,

Kind regards,

Carlton Lane

Microsoft Visual Basic Team

|||

I am trying to use a desktop application to open a SqlServerCe database located on the PDA, but it throws this exception:

Unable to load DLL 'sqlceme30.dll': The specified module could not be found. (Exception from HRESULT: 0x8007007E)

I checked and see System.Data.SqlServerCe.DLL exists under:

D:\Program Files\Microsoft Visual Studio 8\Common7\IDE

D:\Program Files\Microsoft Visual Studio 8\SmartDevices\SDK\SQL Server\Mobile\v3.0

Actually it throws that exception no matter what database I try to open. Here is my code:

connStr = "Data Source =""Mobile Device\Program Files\Barcode_PDA\pda.sdf"";"
conn1 = New SqlServerCe.SqlCeConnection(connStr) ' <-- this line generates the error

Thanks for your help!

|||

Hi Alex,

I'm sorry, but the scenario of opening a database on a device remotely from a desktop PC isn't supported by the SQLCE engine. Eventually, you'll get an error about the connection string being invalid. This is because the engine targets local data scenarios - that is, where the application and database reside on the same machine, in the same process. This starts to touch on client/server scenarios, which currently aren't supported.

Your current error about the missing DLLs is telling us that the application is starting but can't find the engine. These DLLs are found next to the System.Data.SqlServerCe.dll file. The ones in Common7\IDE are for your PC. The ones in SmartDevices\SDK are for the device. For a PC application, copy the ones from Common7\IDE into the executing directory of your application. But again, after you get past this error, you will eventually get an error about the connection string being invalid, because this scenario isn't supported.

HTH and good luck,

Carlton

|||

Hi everybody,

First, thank you for helping me.

I want to tell you that I solved my problem easily by installing the SQL Server Everywhere Edition CTP, which installs the SQL Server CE tools; after that, all the DLLs loaded correctly.

Handling SQL Exception

I'm unsure how to handle an SQL Exception correctly when the database is unavailable/offline.

I have my aspx file with the C# code-behind, but all of the SQL stuff is done in a separate code file in the App_Code directory.
E.g.

CODE-BEHIND
DatabaseModifier.deleteUser(username);

DATABASEMODIFIER.cs
public static void deleteUser(string username)
{
SqlConnection conn = SqlLogin.SqlConnect;
SqlCommand command = new SqlCommand("DELETE FROM <table> WHERE Username = '" + username + "'", conn);
conn.Open();
command.ExecuteNonQuery();
conn.Close();
}

Now, that code works perfectly; however, if the database I'm connecting to is offline, a SqlException is thrown, and because the SQL is handled in my DatabaseModifier class, I'm not sure how to handle it correctly.
If I use a Try/Catch block in my code-behind, nothing gets caught there because the error occurs in my DatabaseModifier class. If I use a Try/Catch block in my DatabaseModifier class, what can I put in the catch block to inform the user that the database is offline, and/or how can I perform a URL redirection?

Any help is greatly appreciated.

You can wrap the connection-open code in the DatabaseModifier class in a try-catch block, and in the catch block simply rethrow the exception. Then, in the page code-behind, wrap the database activity in a try-catch block; there you will receive any exception generated in the DatabaseModifier class. Since the code in the DatabaseModifier class does nothing but throw the exception, you can handle it in the page code.

Hope this will help.

|||

So in the DatabaseModifier class:
Try
{
code
}
Catch(SqlException)
{
//nothing in here
}

In code behind:
Try
{
code
}
Catch(SqlException e)
{
//output e.Message to user?
}

|||

Something like that:

In database modifier:
Try
{
code
}
Catch(SqlException exp)
{
throw exp;
}
In code behind:
Try
{
code
}
Catch(SqlException e)
{
//output e.Message to user?
}

Now, you can see that the db modifier just throws any exceptions it receives during an operation (which is fair, since you don't have any user output system set up in the db modifier class), and the page code is set up to handle the exceptions thrown by the db modifier, as well as any other exceptions raised during its own execution.

Hope this will help
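Putting the advice together, here is a minimal C# sketch (class and table names follow the thread's examples; the connection string is a placeholder). One refinement over the snippet above: a bare `throw;` rethrows while preserving the original stack trace, unlike `throw exp;`.

```csharp
// Sketch of the pattern discussed above. Class, method, and table names
// follow the thread's examples; the connection string is a placeholder.
using System;
using System.Data.SqlClient;

public static class DatabaseModifier
{
    public static void DeleteUser(string username)
    {
        try
        {
            using (SqlConnection conn = new SqlConnection("...connection string..."))
            using (SqlCommand command = new SqlCommand(
                "DELETE FROM Users WHERE Username = @username", conn))
            {
                // A parameter avoids building SQL from raw user input.
                command.Parameters.AddWithValue("@username", username);
                conn.Open();
                command.ExecuteNonQuery();
            }
        }
        catch (SqlException)
        {
            // "throw;" (not "throw ex;") rethrows while preserving the
            // original stack trace, so the code-behind sees the real error.
            throw;
        }
    }
}
```

In the code-behind, a catch block for SqlException can then show a friendly message or Response.Redirect to an error page instead of exposing the raw exception.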

handling special characters using oledb to oracle

hi,

I am using OLE DB to connect to Oracle.
I want to know if there is a way to handle different character sets in this type of connection. For SQL-to-SQL transfers, I have been using Auto Translate in the connection string.
What about SQL OLE DB to Oracle? How can I make sure that the data from SQL Server to Oracle is transferred as-is?

many thanks.

You might be better off asking this on the Data Access forum:

http://forums.microsoft.com/MSDN/ShowForum.aspx?ForumID=87&SiteID=1

-Jamie

Handling Schema Changes

What is the best way to handle schema changes as database needs change?
What I need to find out is the best way to write database interface code that stays
flexible in the face of schema changes. Would having stored procedures as an interface
mechanism be best, so that old app software will still be compatible? It is
impractical for us to upgrade all apps simultaneously, so if a new column is
added to a table, old apps will not fill in this column while newer apps that
support it will.

Thanks|||Use stored procedures as your data access layer. Supporting multiple
applications is much easier that way because the SPs can insulate the
application from underlying schema changes. This is just one of the
benefits of using SPs.

--
David Portas
SQL Server MVP
--
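David's advice can be illustrated with a sketch: a stored-procedure parameter added with a default lets old applications keep calling the procedure unchanged after a column is added. All names here are illustrative.

```sql
-- Sketch: a column added later is exposed as a defaulted parameter, so the
-- procedure's old signature keeps working. Old apps call
--   EXEC dbo.AddMember @Name = 'X';
-- while new apps also pass @Region. Names are illustrative.
CREATE PROCEDURE dbo.AddMember
    @Name   NVARCHAR(100),
    @Region NVARCHAR(50) = NULL   -- column added later; defaulted for old apps
AS
BEGIN
    INSERT INTO dbo.Membership (Name, Region)
    VALUES (@Name, @Region);
END
```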

Handling provider-level errors from .Net SqlClient Data Provider

Hi,

After reading this helpful blog entry:

http://blogs.msdn.com/sql_protocols/archive/2007/05/13/sql-network-interfaces-error-26-error-locating-server-instance-specified.aspx

I think this might be a good place to ask the following question.

I am writing the error handling code for my data access layer for a web application. I am using the Enterprise Library Data Access Application Block. Although this supports generic database connections, I realized that I need to handle errors specific to each database type. Microsoft SQL is the only database type I am using for now, so I am using a try...catch (SqlException e).

In testing my code, I intentionally changed the instance name in web.config to a name that does not exist. I get the very popular error 26 - Error Locating Server/Instance Specified. This is returned as a SqlException, but the SqlError.Number property is set to -1.

1. Am I getting "-1" because the provider hasn't actually connected to SQL yet, so it doesn't have an actual SQL error number?
2. Can I assume that (SqlError.Number == -1) is always a fatal, provider-level connection exception?
3. Will the provider ever use another SqlError.Number of its own? Or do all numbers besides -1 come from the SQL sysmessages table?
4. Is there a comprehensive list of what exceptions might be raised by the SqlClient provider, including #26?

The reason for all the questions is that in a web application, I want to prevent the end-user from seeing the "real" exception if it has to do with configuration errors. However, maybe there are other errors that the user should see and handle? It's hard to know without a full list of SqlClient provider errors, along with the SqlError.Number that each error maps to.

Thanks and regards,

Mark

I can only answer 1. SqlError.Number corresponds to the SQL Server message id from sys.messages table on the server, so if it is not -1 it indicates an error message coming from the server.

I believe it will be impossible to get an exhaustive list of errors numbers for each error on the server. There will always be newer error messages with new SQL Servers and they number in thousands, check out the sys.messages table on SQL Server 2005.

Thanks

Waseem
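Waseem's point can be sketched as a small check on the SqlError collection. Note the -1 convention here follows the behaviour Mark observed in the thread rather than a documented guarantee:

```csharp
// Sketch: distinguish provider-level failures (observed as Number == -1)
// from server-raised errors, whose Number maps to sys.messages on the server.
using System;
using System.Data.SqlClient;

class ErrorInspection
{
    static void HandleSqlException(SqlException ex)
    {
        foreach (SqlError err in ex.Errors)
        {
            if (err.Number == -1)
            {
                // Provider/network-level failure: no server message id available.
                Console.WriteLine("Connection-level error: " + err.Message);
            }
            else
            {
                // Server error: Number corresponds to an id in sys.messages.
                Console.WriteLine("Server error {0}, severity {1}: {2}",
                    err.Number, err.Class, err.Message);
            }
        }
    }
}
```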

|||

Waseem Basheer - MSFT wrote:

I believe it will be impossible to get an exhaustive list of errors numbers for each error on the server. There will always be newer error messages with new SQL Servers and they number in thousands, check out the sys.messages table on SQL Server 2005.

Thanks Waseem. I'm familiar with sysmessages. For messages coming directly from SQL server, sysmessages is all I need.

I am looking for a comprehensive list of SqlClient messages that are NOT in sysmessages, i.e. the errors that are raised directly by the .NET SqlClient provider. Hopefully this is a much more limited number of messages.

Regards,

Mark

|||

Hi, mcbsys

Your question is more specific to .NET Data access, so, I recommend you post your question to the forum where many experts can help you quickly.

http://forums.microsoft.com/MSDN/ShowForum.aspx?ForumID=45&SiteID=1

HTH

Ming.

|||

MING LV wrote:

Your question is more specific to .NET Data access, so, I recommend you post your question to the forum where many experts can help you quickly.

http://forums.microsoft.com/MSDN/ShowForum.aspx?ForumID=45&SiteID=1

Good idea, I will try that.

Thanks,

Mark

handling parameters that could be missing

I have a search app that I'm developing in ASP.NET
It has 5 parameters
MRN, Name, Sex, DOB, SSN
All or any combination can be used.
what is the best way to handle the non-supplied parameters?
i.e. if the user selects only supplies 'Name' and 'MRN'
I want the SQL for the stored procedure to read as if it were just 2
parameters:
(but have one stored procedure that has parameter holders for all 5)
Select MRN, Name, Sex, DOB, SSN
From Membership
WHERE MRN = 123456 AND Name like 'Johns%'
The way I have been handling this is to rewrite a different SQL statement
for every possible situation, and pass that string to the adapter. But I can
see that I'm not taking advantage of the processing that SQL server could do
in a stored procedure, and for 1.4 million records, my app is just timing out
all the time.
This must be a common issue. Could someone show me the ideal way to handle
this?

Have a look at
http://www.sommarskog.se/dyn-search.html
Roji. P. Thomas
Net Asset Management
http://toponewithties.blogspot.com
"jonefer" <jonefer@.discussions.microsoft.com> wrote in message
news:0FFF3A3A-AB66-4A96-B63E-73E937F7A164@.microsoft.com...
>I have a search app that I'm developing in ASP.NET
> It has 5 parameters
> MRN, Name, Sex, DOB, SSN
> All or any combination can be used.
> what is the best way to handle the non-supplied parameters?
> i.e. if the user selects only supplies 'Name' and 'MRN'
> I want the SQL for the stored procedure to read as if it were just 2
> parameters:
> (but have one stored procedure that has parameter holders for all 5)
> Select MRN, Name, Sex, DOB, SSN
> From Membership
> WHERE MRN = 123456 AND Name like 'Johns%'
> The way I have been handling this is to rewrite a different SQL statement
> for every possible situation, and pass that string to the adapter. But I
> can
> see that I'm not taking advantage of the processing that SQL server could
> do
> in a stored procedure, and for 1.4 Million records, my app is just timing
> out
> all the time.
> This must be a common issue. Could someone show me the ideal way to
> handle
> this?
>|||"jonefer" wrote:
> I have a search app that I'm developing in ASP.NET
> It has 5 parameters
> MRN, Name, Sex, DOB, SSN
> All or any combination can be used.
> what is the best way to handle the non-supplied parameters?
> i.e. if the user selects only supplies 'Name' and 'MRN'
> I want the SQL for the stored procedure to read as if it were just 2
> parameters:
> (but have one stored procedure that has parameter holders for all 5)
> Select MRN, Name, Sex, DOB, SSN
> From Membership
> WHERE MRN = 123456 AND Name like 'Johns%'
> The way I have been handling this is to rewrite a different SQL statement
> for every possible situation, and pass that string to the adapter. But I can
> see that I'm not taking advantage of the processing that SQL server could do
> in a stored procedure, and for 1.4 Million records, my app is just timing out
> all the time.
> This must be a common issue. Could someone show me the ideal way to handle
> this?
Have you tried something along the lines of:
create procedure p_MyProc
@pMyParm1 int = NULL, @pMyParm2 datetime = NULL,
@pMyParm3 varchar(128) = NULL, @pMyParm4 bit = NULL
as
-- each parameter defaults to NULL, so any of them can be omitted by the caller
if @pMyParm1 IS NULL do something ...
if @pMyParm2 IS NULL do something ...
if @pMyParm3 IS NULL do something ...
if @pMyParm4 IS NULL do something ...|||Thank you. That was excellent.
"Roji. P. Thomas" wrote:

> Have a look at
> http://www.sommarskog.se/dyn-search.html
> --
> Roji. P. Thomas
> Net Asset Management
> http://toponewithties.blogspot.com
>
> "jonefer" <jonefer@.discussions.microsoft.com> wrote in message
> news:0FFF3A3A-AB66-4A96-B63E-73E937F7A164@.microsoft.com...
>
I implemented my search with the 5 parameters using your Dynamic SQL example.
But because of the way I want to present the final results... it always
times out.
So I will list what I'm trying to get in my final results as well as what
the Stored procedure looks like (perhaps you can help me tweak it)
What I want in the final selection:
SELECT DISTINCT MRN, MemNAME AS Member, DOB, SEX, SSN, [GROUP], SGR,
[FROM-DT], [THRU-DT]
FROM qMembershipSelect AS Y
GROUP BY MRN, MemNAME, DOB, SEX, SSN, [GROUP], SGR, [FROM-DT], [THRU-DT]
HAVING [THRU-DT] Is Null Or [THRU-DT] In (Select Max(X.[Thru-Dt]) From
qMembershipSelect As X Where X.MRN = Y.MRN And X.MRN Not In (Select Z.MRN
From qMembershipSelect As Z Where Z.[THRU-DT] Is Null))
ORDER BY MemNAME, [FROM-DT] DESC;
How I implemented this in the Stored Procedure:
CREATE PROCEDURE qMemberSelect
@MRN int = NULL,
@MemNAME nvarchar(40) = NULL,
@DOB datetime = NULL,
@SSN nvarchar(9) = NULL,
@SEX nvarchar(1) = NULL,
@debug bit = 0 AS
DECLARE @sql nvarchar(4000),
@paramlist nvarchar(4000)
SELECT @sql =
'SELECT
[MRN],[MemName],[DOB],[SEX],[SSN],[GROUP],[SGR],[FROM-DT],[THRU-DT] FROM
MEMBERSHIP AS Y
WHERE 1=1'
if @MRN IS NOT NULL
SELECT @sql = @sql + ' AND MRN = @xMRN'
if @MemNAME IS NOT NULL
SELECT @sql = @sql + ' AND MemName like @xMemNAME + ''%'''
if @DOB IS NOT NULL
SELECT @sql = @sql + ' AND DOB = @xDOB'
if @SSN IS NOT NULL
SELECT @sql = @sql + ' AND SSN = @xSSN'
if @SEX IS NOT NULL
SELECT @sql = @sql + ' AND SEX = @xSEX'
SELECT @sql = @sql + ' GROUP BY MRN, MemNAME, DOB, SEX, SSN, [GROUP], SGR,
[FROM-DT], [THRU-DT]
HAVING len([THRU-DT])= 0
Or [THRU-DT] In (Select Max(X.[Thru-Dt]) From
Membership As X Where X.MRN = Y.MRN And X.MRN
Not in(Select Z.MRN From Membership As Z Where len(Z.[THRU-DT])=0))
ORDER BY MemNAME, [FROM-DT] DESC'
-- print the fully assembled statement, not just the WHERE clause
if @debug = 1
PRINT @sql
SELECT @paramlist = '@xMRN int,
@xMemName nvarchar(40),
@xDOB datetime,
@xSSN nvarchar(9),
@xSEX nvarchar(1)'
EXEC sp_executesql @sql, @paramlist,
@MRN, @MemNAME, @DOB, @SSN, @SEX
But there must be a way to do this in two phases because trying it all in
one pull is just not efficient.
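One way to do it in two phases, as suggested above, is to run the dynamic filter into a temp table first and then apply the latest-[THRU-DT] logic to that much smaller set. This is only a sketch, assuming the same Membership columns as the procedure above and showing just two of the five filters:

```sql
-- Phase 1: apply the user's filters once, into a small temp table
SELECT MRN, MemNAME, DOB, SEX, SSN, [GROUP], SGR, [FROM-DT], [THRU-DT]
INTO #filtered
FROM Membership
WHERE (@MRN IS NULL OR MRN = @MRN)
  AND (@MemNAME IS NULL OR MemNAME LIKE @MemNAME + '%');

-- Phase 2: keep open rows, or the latest closed row per member,
-- working only against the filtered set
SELECT f.*
FROM #filtered AS f
WHERE f.[THRU-DT] IS NULL
   OR f.[THRU-DT] = (SELECT MAX(x.[THRU-DT])
                     FROM #filtered AS x
                     WHERE x.MRN = f.MRN)
ORDER BY f.MemNAME, f.[FROM-DT] DESC;
```

The correlated MAX in phase 2 then scans thousands of rows instead of 1.4 million; an index on MRN in the base table still helps phase 1.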
"Roji. P. Thomas" wrote:

> Have a look at
> http://www.sommarskog.se/dyn-search.html
> --
> Roji. P. Thomas
> Net Asset Management
> http://toponewithties.blogspot.com
>
> "jonefer" <jonefer@.discussions.microsoft.com> wrote in message
> news:0FFF3A3A-AB66-4A96-B63E-73E937F7A164@.microsoft.com...
>
>|||How about...
SELECT MRN, Name, Sex, DOB, SSN
FROM dbo.Membership
WHERE (MRN = @MRN or @MRN is null)
AND (Name like @Name or @Name is null)
AND (Sex = @Sex or @Sex is null)
etc.
TIP: If there are no user-specific tables, always specify the owner of the
object so that it is not ambiguous and SQL Server does not have to check for
<userid>.Membership and, if not found, check for dbo.Membership. This applies
to referencing tables, views, and stored procedures. Also, never prefix a
stored procedure name with sp_ or xp_ as that can cause unexpected side
effects.
Just my two cents,
Joe
"jonefer" wrote:

> I have a search app that I'm developing in ASP.NET
> It has 5 parameters
> MRN, Name, Sex, DOB, SSN
> All or any combination can be used.
> what is the best way to handle the non-supplied parameters?
> i.e. if the user selects only supplies 'Name' and 'MRN'
> I want the SQL for the stored procedure to read as if it were just 2
> parameters:
> (but have one stored procedure that has parameter holders for all 5)
> Select MRN, Name, Sex, DOB, SSN
> From Membership
> WHERE MRN = 123456 AND Name like 'Johns%'
> The way I have been handling this is to rewrite a different SQL statement
> for every possible situation, and pass that string to the adapter. But I
can
> see that I'm not taking advantage of the processing that SQL server could
do
> in a stored procedure, and for 1.4 Million records, my app is just timing
out
> all the time.
> This must be a common issue. Could someone show me the ideal way to handl
e
> this?
Joe,
If you use sp_XXXXXX, SQL Server looks in the master database first even if you
qualify it with the database name, as in database..sp_XXX, so we end up searching the master
database unnecessarily. That is why most folks suggest not using the sp_XXXX prefix. I have never
heard of side effects; if you know of any, please share with us.
--
SQL SERVER DBA
"Joe from WI" wrote:
> How about...
> SELECT MRN, Name, Sex, DOB, SSN
> FROM dbo.Membership
> WHERE (MRN = @.MRN or @.MRN is null)
> AND (Name like @.Name or @.Name is null)
> AND (Sex = @.Sex or @.Sex is null)
> etc.
> TIP: If there are not user specific tables, always specify the owner of the
> object so that it is not ambiguous and SQL Server does not have to check for
> <userid>.Membership and if not found, check for dbo.Membership. This applies
> for referencing tables, views, and stored procedures. Also, never prefix a
> stored procedure name with sp_ or xp_ as that can cause unexpected side
> effects.
> Just my two cents,
> Joe
> "jonefer" wrote:
>

Handling out-dated transaction records

For performance reasons, I believe many programs should have a house-keeping procedure to clean up transaction history. Is there any best practice for this? Or should it be done simply by moving transaction data from the transaction table into a history table? Anything else I should be concerned about?

Depends on your environment. Many places like to keep records for at least 2 years, but if you are in the medical field and some of your data falls under HIPAA, then you may need to keep the records for 7+ years. In those cases, I usually have an audit table that contains all the transaction data (it's written to via a trigger). Then I keep the transaction table fairly clean (only recent/open/pending rows), and if you need historical data, I run queries off the audit table instead, which is never purged and has a good set of indexes on it.
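A minimal sketch of the trigger-plus-audit-table idea described above (the table and column names here are hypothetical, not from the thread):

```sql
-- Hypothetical transaction table and its audit copy
CREATE TABLE dbo.Trx      (TrxID int PRIMARY KEY, Amount money, Status varchar(10));
CREATE TABLE dbo.TrxAudit (TrxID int, Amount money, Status varchar(10),
                           AuditDate datetime NOT NULL DEFAULT GETDATE());
GO
-- Copy every insert/update into the audit table; the audit table is never purged
CREATE TRIGGER trg_Trx_Audit ON dbo.Trx
AFTER INSERT, UPDATE
AS
    INSERT INTO dbo.TrxAudit (TrxID, Amount, Status)
    SELECT TrxID, Amount, Status FROM inserted;
```

A scheduled job can then delete closed rows from dbo.Trx without losing history, and historical reports query dbo.TrxAudit.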

Handling optional parameters

On many reports I have optional parameters; for example, if the employee is filled in, the report runs for that employee, but if it's null, it runs for all employees. Which is the best way to handle this?

The two options I am currently looking at are:

select * from employee where (employee.id = @EmpID or @EmpID is Null)

select * from employee where isnull(@EmpID, employee.id) = employee.id

Anyone else have a different solution?

Currently we use the OR approach and I wanted to see if anyone had any thoughts before switching to using IsNull, which seems a bit better.

OR can be slow. I always use the ISNULL approach.

|||

Another way to do this: include "*(ALL)" in your employee dataset and default the parameter to "*(ALL)". Then pass the selected value back to the stored procedure:

IF (@EMPID = '*(ALL)')

select * from Employee

Else

Select * from Employee where employee.id = @EMPID

This way it is more intuitive to end users.

|||

The problem with using if (@EmpID is null) is that you then need to maintain two queries, and for reports that have 3 or more optional parameters this could mean 8 or more queries to maintain just for the optional parameters. We also thought about using an expression for the query, but some clients like to use the generic query view when they edit the report, so that makes this option unavailable.

I think we will be using the isnull option for our solution.

Thanks

|||

I thought this blog post was useful for this topic. You can combine approaches if you want.

http://bloggingabout.net/blogs/egiardina/archive/2007/06/26/sql-server-reporting-services-optional-parameters.aspx

|||

Techquest wrote:

Another way to do this: include "*(ALL)" in your employee dataset and default the parameter to "*(ALL)". Then pass the selected value back to the stored procedure:

IF (@EMPID = '*(ALL)')

select * from Employee

Else

Select * from Employee where employee.id = @EMPID

This way it is more intuitive to end users.

Not everyone uses stored procs to serve reports.

|||

Found an issue using the IsNull approach: if the column behind the optional parameter can be null in the table, the query will not return the null rows.

EX. Middle name is a nullable field

select * from Employee where MiddleName = IsNull(@MiddleName, MiddleName)

This won't return all rows when the parameter is null; it returns only the rows that have a non-null MiddleName.

I think it may be safer to use the OR approach now, because other people who modify reports and aren't familiar with this may use IsNull in cases where it should not be used.

Anyone have any suggestions around this? I know most of the time the optional parameter does not go against a nullable field, but it could cause a lot of problems if you did it by mistake.

Thanks,

Kevin
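For a nullable column, the OR form keeps the NULL rows when the parameter is omitted, which is exactly the behavior the IsNull form loses. A sketch, reusing the Employee/MiddleName example from above:

```sql
-- Returns every row (including NULL middle names) when @MiddleName is NULL,
-- and only matching rows when it is supplied
SELECT *
FROM dbo.Employee
WHERE (@MiddleName IS NULL OR MiddleName = @MiddleName);
```

The test moves from the column to the parameter itself, so rows where MiddleName is NULL are never filtered out by accident.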

|||

Yeah, avoid nullable fields :)

In case you do have them though you could use:

select * from Employee where COALESCE(MiddleName, '') = COALESCE(@MiddleName, MiddleName, '')

COALESCE is basically like ISNULL, but it takes any number of parameters rather than just 2 and returns the first non-null value.

|||

COALESCE functions differently than IsNull, though. If you look at the execution plan, these two are the same:

select * from employee where employee.id = COALESCE(@EmpID, employee.id)

select * from employee where (employee.id = @EmpID or @EmpID is Null)

This defeats the purpose of using IsNull for the better execution with some indexes.
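A common workaround for these catch-all filters, not mentioned in the thread so treat it as a suggestion, is to add OPTION (RECOMPILE). On later versions of SQL Server the optimizer then rebuilds the plan per call with the actual parameter values and can strip out the branches for parameters that are NULL:

```sql
SELECT *
FROM dbo.Employee
WHERE (@EmpID IS NULL OR Employee.id = @EmpID)
OPTION (RECOMPILE);  -- plan compiled per execution with known parameter values
```

The trade-off is a compile on every execution, which is usually acceptable for report queries but worth measuring on hot paths.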

Handling of xml data within Oracle 9/10 and sql Server 2005

Hi there,
I would like to know the best way to handle Xml data stored on Oracle and
Sql Server 2005 using XQuery/XPath and AdoNet. Ideally, the C# code should
not be different for Oracle and Sql Server. Any hint ?
Regards
Sql Server 2005/2000 has an OPENXML statement.
When I used Oracle 9 (while back), I was very disappointed in its xml
capabilities.
First question:
Are you pushing xml into the db, or reading it out?
It looks like reading it out, but wanted to make sure.
"Oriane" <oriane@.guermantes.fr> wrote in message
news:BD4ABE46-921B-4C70-92E4-F949528428AC@.microsoft.com...
> Hi there,
> I would like to know the best way to handle Xml data stored on Oracle and
> Sql Server 2005 using XQuery/XPath and AdoNet. Ideally, the C# code should
> not be different for Oracle and Sql Server. Any hint ?
> Regards
|||Hi Sloan,
"sloan" <sloan@.ipass.net> a crit dans le message de
news:uZvmoiYuHHA.536@.TK2MSFTNGP06.phx.gbl...
> Sql Server 2005/2000 has a OPENXML statement.
> When I used Oracle 9 (while back), I was very disappointed in its xml
> capabilities.
And what about Oracle 10g ?
> First question:
> Are you pushing xml into the db, or reading it out?
Reading and writing, but mostly reading...
> It looks like reading it out, but wanted to make sure.
|||Oracle 10? No idea.
Here is what I found.
I now remember how much I hated Oracle XML.
Good luck. I have no more I can offer beyond this.
/* Actual Logic of This procedure */
--this is just a check to make sure it can be cast as a XMLTYPE document
SELECT sys.xmltype.createxml(in_errorlogxml) INTO xmlvar FROM dual;
convertedBlobToXMLType := XMLTYPE(in_errorlogxml);
SELECT SEQ_ErrorLogID.NEXTVAL INTO ErrorID FROM DUAL;
--There is an issue with referring to the XML directly (as a cast clob
object)
--This is a workaround ... by putting the value into a temp database
--and then referring to that value, it will work.
delete from XMLTempHolderTable;
commit;
insert into XMLTempHolderTable values
(ErrorID,convertedBlobToXMLType);--in_errorlogxml);
commit;
INSERT INTO ERRORLOG (
ErrorID,
MachineName,
TimeStampValue,
FullName,
AppDomainName,
ThreadIdentity,
WindowsIdentity,
ExceptionType,
Message,
TargetSite,
Source,
StackTrace,
EntryDateTime
)
SELECT ErrorID,
extractValue(value(d),'ExceptionInformation/AdditionalInformationProperty/@.ExceptionManager.MachineName'),
extractValue(value(d),'ExceptionInformation/AdditionalInformationProperty/@.ExceptionManager.TimeStamp'),
extractValue(value(d),'ExceptionInformation/AdditionalInformationProperty/@.ExceptionManager.FullName'),
extractValue(value(d),'ExceptionInformation/AdditionalInformationProperty/@.ExceptionManager.AppDomainName'),
extractValue(value(d),'ExceptionInformation/AdditionalInformationProperty/@.ExceptionManager.ThreadIdentity'),
extractValue(value(d),'ExceptionInformation/AdditionalInformationProperty/@.ExceptionManager.WindowsIdentity'),
extractValue(value(d),'/ExceptionInformation/Exception/@.ExceptionType'),
extractValue(value(d),'/ExceptionInformation/Exception/@.Message'),
extractValue(value(d),'/ExceptionInformation/Exception/@.TargetSite'),
extractValue(value(d),'/ExceptionInformation/Exception/@.Source'),
extractValue(value(d),'/ExceptionInformation/Exception/StackTrace'),
sysdate
--FROM table (xmlsequence(extract(XMLTYPE.createXML(in_errorlogxml),
'/ExceptionInformation'))) d; --Does not work
--FROM XMLTempHolderTable tmp,table
(xmlsequence(extract(xmltype(tmp.XMLValue), '/ExceptionInformation')))
; --if the XMLValue is a clob
FROM XMLTempHolderTable tmp,
table (xmlsequence(extract((tmp.XMLValue),
'/ExceptionInformation'))) d--; --if the XMLValue is a XMLType
WHERE tmp.XMLID = ErrorID;
--Here's the deal. The second and third "FROM" is reading the value from
an intermediate table
--and it works
--the first FROM is trying to read the variable outright, and it fails?
--For some reason, the code cannot refer to the cast clob (as xmltype)
directly
--but if one puts it into an intermediate table, and then read it, it
works?
COMMIT;
++++++++++++++++++++++++++=
DROP TABLE ERRORLOG
/
CREATE TABLE ERRORLOG (
/*
The below table definition maps to the information being provided by the
Microsoft.ApplicationBlocks.ExceptionManagement XMLPublisher
Here is a sample xml document.
<ExceptionInformation>
<AdditionalInformationProperty
ExceptionManager.MachineName="CelineDionXP1"
ExceptionManager.TimeStamp="11/8/2002 1:13:48 PM"
ExceptionManager.FullName="Microsoft.ApplicationBl ocks.ExceptionManagement,
Version=1.0.1769.18782, Culture=neutral, PublicKeyToken=null"
ExceptionManager.AppDomainName="ExceptionManagemen tQuickStartSamples.exe"
ExceptionManager.ThreadIdentity=""
ExceptionManager.WindowsIdentity="jean claude van damme" />
<Exception ExceptionType="System.DivideByZeroException"
Message="Attempted to divide by zero."
TargetSite="Void btnLogon_Click(System.Object, System.EventArgs)"
Source="ExceptionManagementQuickStartSamples">
<StackTrace> at
ExceptionManagementQuickStartSamples.Form1.btnLogon_Click(Object sender,
EventArgs e) in c:\program files\microsoft application blocks for
.net\exception
management\code\cs\exceptionmanagementquickstartsamples\form1.cs:line
171</StackTrace>
</Exception>
</ExceptionInformation>
*/
ErrorID int not null primary key ,
MachineName varchar2(128) null ,
TimeStampValue varchar2(64) null ,
FullName varchar2(128) null ,
AppDomainName varchar2(128) null ,
ThreadIdentity varchar2(128) null ,
WindowsIdentity varchar2(128) null ,
StackTrace varchar2(4000) null ,
ExceptionType varchar2(128) null ,
Message varchar2(640) not null ,
TargetSite varchar2(128) null ,
Source varchar2(128) null ,
EntryDateTime date default sysdate not null
)
/
DROP SEQUENCE SEQ_ErrorLog
/
CREATE SEQUENCE SEQ_ErrorLog
start with 1
increment by 1
nomaxvalue
/
DROP TABLE XMLTempHolderTable
/
CREATE TABLE XMLTempHolderTable (
/*
There is an issue reading a clob as an XMLTYPE directly
This is a temporary workaround, this table should never have more than 1
record in it
and is just a working table.
*/
XMLID int not null primary key ,
XMLValue XMLTYPE --CLOB
)
/
COMMIT
/
|||"sloan" <sloan@.ipass.net> a crit dans le message de
news:eadkgEeuHHA.3368@.TK2MSFTNGP02.phx.gbl...
> Oracle 10? No idea.
>
> Here is what I found.
>
> I now remember how much I hated Oracle XML.
> Good luck. I have no more I can offer beyond this.
Ok thanks

handling nulls with inserts

Hi,

I've got a program that has a form. On the form there are several fields which are optional, and I would prefer them to remain NULL in the database (rather than a default value) if the user doesn't fill them out. I'm using stored procedures for my inserts (SQL 2000 + C#).

How do most people handle these situations? Right now I have a separate function which receives the parameter values as params and then handles creating the parameters and executing the stored procedure. I'm not sure how to handle passing that I want one of those to be null, at least not without having multiple functions and possibly various stored procedures for different possibilities.

Thanks.

I would recommend not using stored procedures if this is what you want to accomplish. Like you said, you would have to create multiple stored procedures for each variation. Instead, build the insert in your code. I use a StringBuilder to build an SQL statement. Within the logic of your system you can use if/then statements to determine what the user entered and therefore how to build the SQL and the correlating params.

Have you tried to pass the value DBNull.Value to the stored procedure? Not sure that this would work, but it might be worth a try if you want to keep your stored procedures.

Hope that helps.
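If you'd rather keep the stored procedure, another option (a sketch; the table and column names are hypothetical) is to give the optional parameters NULL defaults, so the caller simply omits them and the columns are inserted as NULL:

```sql
CREATE PROCEDURE dbo.InsertPerson
    @Name       nvarchar(40),
    @MiddleName nvarchar(40) = NULL,  -- optional: stays NULL if not supplied
    @Phone      varchar(20)  = NULL   -- optional
AS
INSERT INTO dbo.Person (Name, MiddleName, Phone)
VALUES (@Name, @MiddleName, @Phone);
```

From C#, you can also set a SqlParameter's Value to DBNull.Value, as suggested above, which maps to SQL NULL on the server.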

Handling nulls in matrix sub totals

I have a matrix where I am using the InScope function to return different
calculations depending on which group it is in (i.e. the detail is just the
value and the subtotals are sum(values)).
For the subtotal group I have used:
iif(isnothing(sum(Fields!CurrentCount.Value)), 0,
sum(Fields!CurrentCount.Value)
But this is still returning blanks where there are no values in the group to
sum together.
Can anyone suggest how to get around this issue?
TIA
Andrew
you might try iif(sum(Fields!CurrentCount.Value) = nothing, ...)
"Andrew Murphy" <AndrewMurphy@.discussions.microsoft.com> wrote in message
news:56B0AB62-6CDE-4F41-BF39-AC72FABAC2C0@.microsoft.com...
>I have a matrix where I am using the inscope function to return different
> calculations depending on which group it is in (ie the detail is just the
> value and the sub totals are sum(values)).
> In for the subtotal group I have used:
> iif(isnothing(sum(Fields!CurrentCount.Value)), 0,
> sum(Fields!CurrentCount.Value)
> But this is still returning blanks where there are no values in the group
> to
> sum together.
> Can anyone suggest how to get around this issue as it ?
> TIA
> Andrew
Also, if the formula you put on here looks just like the one you have in the
expression, then you need to add a parenthesis onto the end. I tried to
recreate your problem and I get a 0. Weird.
"Andrew Murphy" <AndrewMurphy@.discussions.microsoft.com> wrote in message
news:56B0AB62-6CDE-4F41-BF39-AC72FABAC2C0@.microsoft.com...
>I have a matrix where I am using the inscope function to return different
> calculations depending on which group it is in (ie the detail is just the
> value and the sub totals are sum(values)).
> In for the subtotal group I have used:
> iif(isnothing(sum(Fields!CurrentCount.Value)), 0,
> sum(Fields!CurrentCount.Value)
> But this is still returning blanks where there are no values in the group
> to
> sum together.
> Can anyone suggest how to get around this issue as it ?
> TIA
> Andrew
And also, maybe you should try setting the null value to 0 before you sum it.
"Andrew Murphy" <AndrewMurphy@.discussions.microsoft.com> wrote in message
news:56B0AB62-6CDE-4F41-BF39-AC72FABAC2C0@.microsoft.com...
>I have a matrix where I am using the inscope function to return different
> calculations depending on which group it is in (ie the detail is just the
> value and the sub totals are sum(values)).
> In for the subtotal group I have used:
> iif(isnothing(sum(Fields!CurrentCount.Value)), 0,
> sum(Fields!CurrentCount.Value)
> But this is still returning blanks where there are no values in the group
> to
> sum together.
> Can anyone suggest how to get around this issue as it ?
> TIA
> Andrew
>