
Friday, March 30, 2012

Having trouble distilling something down to a single query

Ok, let's say I have 2 tables, A & B.

Let's say table A has the key A.record_id, and the field A.emp.

Say table B has the key B.record_id, a foreign key B.a_id that links it to table A, and the field B.date. Now, I want to join these tables like so:

SELECT
A."RECORD_ID", A."EMP",
B."RECORD_ID", B."DATE"
FROM
{ oj "DBA"."A" TABLE_A LEFT OUTER JOIN "DBA"."B" TABLE B ON
A."RECORD_ID" = B."A_ID"}

You see, I want a list of all A.record_id values, whether or not I get a match from the B table.

The problem arises when I want to limit the dates via B.date. It's clear to me what the problem is here; I just don't know a way around it.

WHERE
(B."DATE" IS NULL OR
(B."DATE" >= {d '2004-01-01'} AND
B."DATE" <= {d '2004-01-31'}))

So basically, now I'm not getting any A.record_ids for A's that are linked to B's that fall outside of that date range.

Summing up, I want...

1. All A + B where there is a B.date in that range
2. No A+B for results that are not within the entered date range.
3. All A's, regardless of if there is a linked B.
4. All A's, even if there are linked B's outside of the date range.
5. All in 1 statement (due to environment limitations).

Thanks for your help. I'm pretty much self-taught here, so I apologize for not having the vocabulary to make this question more concise. Of course, if I knew better how to explain what I'm trying to do, then I'd probably know how to do it. ;-)

Mock Sample Data

table A
A001 bill
A002 bill
A003 bill
A004 frank
A005 frank
A006 bob

table B
B001 A001 1/1/2004
B002 A001 1/15/2004
B003 A001 4/1/2004
B004 A003 5/1/2004
B005 A004 1/1/2004
B006 A005 3/3/2004

Mock Results

A001 bill B001 1/1/2004
A001 bill B002 1/15/2004
A002 bill NULL NULL
A003 bill NULL NULL
A004 frank B005 1/1/2004
A005 frank NULL NULL
A006 bob NULL NULL


edit: added mock data/results|||I'd use:

(B."DATE" IS NULL OR
(B."DATE" >= {d '2004-01-01'} AND
B."DATE" <= {d '2004-01-31'}))

-PatP|||Sorry, that's how it's in there now; editing above to reflect.

Not the problem.|||What version of which database engine are you using?

-PatP|||Using Sybase 9.|||I just noticed some apparent inconsistencies in your query. Could you post the entire query as you are submitting it so that I can try it? I'm using version 8, but I'd expect that to be close enough.

-PatP|||SELECT
A."RECORD_ID", A."EMP",
B."RECORD_ID", B."DATE"
FROM
{ oj "DBA"."A" A LEFT OUTER JOIN "DBA"."B" B ON
A."RECORD_ID" = B."A_ID"}
WHERE
(B."DATE" IS NULL OR
(B."DATE" >= {d '2004-01-01'} AND
B."DATE" <= {d '2004-01-31'}))

Mock Current Results From Earlier Mock Data

A001 bill B001 1/1/2004
A001 bill B002 1/15/2004
A002 bill NULL NULL
A004 frank B005 1/1/2004
A005 frank NULL NULL

Thanks for looking at this.|||Give up? ;-)|||Try a LEFT OUTER JOIN to a Subquery that restricts your table B.

SELECT
A."RECORD_ID", A."EMP",
S1."RECORD_ID", S1."DATE"
FROM
{ oj "DBA"."A" A LEFT OUTER JOIN
(SELECT *
FROM "DBA"."B" B
WHERE ((B."DATE" >= {d '2004-01-01'} AND B."DATE" <= {d '2004-01-31'})
OR B."DATE" IS NULL)) AS S1
ON A."RECORD_ID" = S1."A_ID"}

I may have a typo with the brackets up there, but something like that should work.

The key is that you are creating a subquery with results restricted to your date range, and then naming that subquery S1. Then the results of S1 are joined to table A.|||1. All A + B where there is a B.date in that range
2. No A+B for results that are not within the entered date range.
3. All A's, regardless of if there is a linked B.
4. All A's, even if there are linked B's outside of the date range.
5. All in 1 statement (due to environment limitations).
Unless I am missing something, there are contradictory conditions in your requirements:

If #1 is to be met Then #3 & #4 cannot be
If #2 is to be met Then #4 cannot be
If #3 is to be met Then B.date is NULL, thus #1 cannot be
If #4 is to be met...see above

Can you clarify?
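For anyone landing on this thread later: the usual way out is to move the date filter from the WHERE clause into the join condition itself, so it only restricts which B rows take part in the join rather than discarding A rows after the join. A sketch, written with plain LEFT OUTER JOIN syntax instead of the { oj } escape:

SELECT
A."RECORD_ID", A."EMP",
B."RECORD_ID", B."DATE"
FROM "DBA"."A" A
LEFT OUTER JOIN "DBA"."B" B
ON A."RECORD_ID" = B."A_ID"
AND B."DATE" >= {d '2004-01-01'}
AND B."DATE" <= {d '2004-01-31'}

Every A comes back once per in-range B, or once with NULLs when it has no B rows at all (or only out-of-range ones), which matches the mock results above.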

Wednesday, March 28, 2012

Having problems importing Data from 2 tables into 1. Please help.

Hi
I am new to SQL and am having problems importing data from two tables to one.
I have 3 tables: Franchise, Club and Reference.
Franchise has FranchiseID as Primary key and FranchiseNumber as unique.
Club has ClubID as Primary key.
Reference has a combination of ClubID and FranchiseNumber as primary key.
I am not able to insert data into table Reference from Franchise and Club.
Any help or suggestions would be highly appreciated. Thanks in advance.

Try the link below for INSERT INTO with a column list, which gives you control over the order of the INSERT. Hope this helps.
http://msdn.microsoft.com/library/default.asp?url=/library/en-us/tsqlref/ts_ia-iz_5cl0.asp|||Thnx very much !!
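For the record, the INSERT INTO ... SELECT being suggested takes roughly this shape - a sketch only, since the post doesn't say how a Club row maps to a Franchise row (the join column below is hypothetical):

INSERT INTO Reference (ClubID, FranchiseNumber)
SELECT c.ClubID, f.FranchiseNumber
FROM Club c
INNER JOIN Franchise f
ON f.FranchiseID = c.FranchiseID -- hypothetical link between Club and Franchise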

Monday, March 26, 2012

HAVING clause is a case statement??

I have written a query which compares two tables, returning any rows where the qty is not the same in each table:

(simple ex)

Select order_no
from table1
inner join table2
on table1.order_no = table2.order_no
group by order_no
having table1.Qty<> table2.Qty

BUT... I need to add a table3, where there may (or may not) be entries - thus a left join. If there is an entry in table3 then use the qty in table3 and not table1... so the having becomes:

CASE WHEN table1.Qty<> table3.Qty
THEN table3.Qty<> table2.Qty
ELSE table1.Qty<> table2.Qty END

but how do I actually write this?|||perhaps if you would care to explain what you're doing?

are you comparing individual Qty values, or the SUMs?

because the HAVING clause may refer only to columns in the GROUP BY or to aggregate functions|||perhaps if you would care to explain what you're doing?

are you comparing individual Qty values, or the SUMs?

because the HAVING clause may refer only to columns in the GROUP BY or to aggregate functions

Sorry - I am trying to compare Sum(qty) for each product in an order (a product may be in the order more than once). I am trying to retrieve any product lines where the summed qtys in table1 and table2 are not the same.

However, if stock was not found, then an allocated qty is recorded in table3... so in that case I want to compare the qtys in table3 and table2?|||select t1.order_no
, t1.sumqty
, t2.order_no
, t2.sumqty
from (
select order_no
, sum(qty) as sumqty
from table1
group
by order_no
) as t1
full outer
join (
select order_no
, sum(qty) as sumqty
from table2
group
by order_no
) as t2
on t2.order_no = t1.order_no
and t2.sumqty <> t1.sumqty

that's the general strategy -- do your sums in derived tables

for table 3, you're on your own :)
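A sketch of one way table3 might fold in - sum it the same way, left join it, and let COALESCE prefer the allocated qty when one exists (this assumes table3 is keyed by order_no like the others):

select t1.order_no
, coalesce(t3.sumqty, t1.sumqty) as effective_qty
, t2.sumqty
from ( select order_no
, sum(qty) as sumqty
from table1
group
by order_no ) as t1
inner
join ( select order_no
, sum(qty) as sumqty
from table2
group
by order_no ) as t2
on t2.order_no = t1.order_no
left outer
join ( select order_no
, sum(qty) as sumqty
from table3
group
by order_no ) as t3
on t3.order_no = t1.order_no
where coalesce(t3.sumqty, t1.sumqty) <> t2.sumqty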

HAVING Clause has no effect

I have this stored procedure. I want to run a few simple SQL functions against my tables. In particular, I want to take a subset of records (one or two years' worth) and calculate AVG, VAR and STDEV.

It does not work the way I thought it would. I end up with the whole input table in #tempor1, which is about 6 years' worth of records.

set ANSI_NULLS ON
set QUOTED_IDENTIFIER OFF

GO
ALTER PROCEDURE [dbo].[findAve1YearDailyClose_MSFT]
AS
BEGIN
SET NOCOUNT ON;
SELECT adjClosed, volume INTO #tempor1 FROM dbo.dailyCl_MSFT
GROUP BY dateTimed, adjClosed, volume
HAVING (dateTimed > DATEADD (year, -1, MAX (dateTimed)))

SELECT AVG (adjClosed) AS "AVGAdjClose1Year",
VAR (adjClosed) AS "VARAdjClose1Year", AVG (volume) AS "AVGVolume1Year",
STDEV (volume) AS "STDEVVolume1Year", COUNT (*) AS "total"
FROM #tempor1
END

Thus if I change the number of years I subtract from the latest date from 1 to 2 I end up with the same result. What is the problem?

Thanks.

What about using:

SELECT adjClosed, volume INTO #tempor1
FROM dbo.dailyCl_MSFT
-- filter with WHERE against the table-wide MAX; in the original query dateTimed
-- was in the GROUP BY, so each group held a single dateTimed value and the
-- HAVING condition (dateTimed > DATEADD(year, -1, MAX(dateTimed))) was always true
WHERE dateTimed > (SELECT DATEADD(year, -1, MAX(dateTimed)) FROM dbo.dailyCl_MSFT)
GROUP BY dateTimed, adjClosed, volume


HTH, Jens K. Suessmeyer.


http://www.sqlserver2005.de

|||

Jens K. Suessmeyer wrote:

What about using:

SELECT adjClosed, volume INTO #tempor1
FROM dbo.dailyCl_MSFT
> WHERE dateTimed > (SELECT DATEADD(year, -1, MAX (dateTimed)) FROM dbo.dailyCl_MSFT)
GROUP BY dateTimed, adjClosed, volume

HTH, Jens K. Suessmeyer.


http://www.sqlserver2005.de

It sure worked! Many thanks for the lesson. Marked as answered!

Thanks.


Having a stored procedure copy tables & also preserve indexing/sch

Hello,
I created a stored procedure that renames a table to OLD_xxxxx and replaces
that table with another (copy) that resides on a different database. I pull
the table names through the use of a cursor and construct a SELECT INTO
statement as follows:
'SELECT * INTO DB1.dbo.' + @tableName + ' FROM DB2.dbo.' + @tableName
It works great, especially since there are 80+ tables that need to be copied
from one database to another. The drawback is that it doesn't preserve the
indexing/foreign key constraints. Is there a way to do this without having to
deal with DTS or creating additional scripts? Ideally I would like to replace
the "SELECT * INTO" statement with something that not only does a copy but
also preserves the indexing! Does such a command exist? Any help from the
Microsoft gurus would be greatly appreciated!!!!
SELECT ... INTO <tablename> doesn't create any of the PRIMARY KEY, UNIQUE,
FOREIGN KEY, CHECK, NOT NULL constraints and doesn't define DEFAULT and
IDENTITY column properties for the new table.
You will have to write separate statements in your stored procedure to
create them.
--Vishal.
"Peter S." wrote:

> Hello,
> I created a stored procedure that renames a table to OLD_xxxxx and replaces
> that table with another (copy) that resides on a different database. I pull
> the
> tablename names through the use of a cursor table and construct a SELECT INTO
> statement as follows
> 'SELECT * INTO DB1.dbo.' + @tableName + ' FROM DB2.dbo.' + @tableName
> It works great especially since there are 80+ tables that need to be copied
> from one database to another. The drawback is that it doesn't preserve the
> indexing/foreign key constraints. Is there a way to do this without having to
> deal with DTS or creating additional scripts? Ideally I would like to
> replace
> the "SELECT * INTO" statement with something that not only does a copy but
> also preserves the indexing! Does such a command exist? Any help from the
> Microsoft guru's would be greatly appreciated!!!!
>
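For one table, the separate statements Vishal describes would look something like this - a sketch with hypothetical table, key and index names; in practice you would script the real constraints and indexes from the source database:

SELECT * INTO DB1.dbo.MyTable FROM DB2.dbo.MyTable

ALTER TABLE DB1.dbo.MyTable
ADD CONSTRAINT PK_MyTable PRIMARY KEY CLUSTERED (MyTableID)

ALTER TABLE DB1.dbo.MyTable
ADD CONSTRAINT FK_MyTable_Parent FOREIGN KEY (ParentID)
REFERENCES DB1.dbo.ParentTable (ParentID)

CREATE NONCLUSTERED INDEX IX_MyTable_ParentID ON DB1.dbo.MyTable (ParentID)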

Friday, March 23, 2012

Have Suggestions for a Front End Tool?

I am currently using Microsoft Excel pivot tables to access data cubes from SQL Server 2000 as a front-end tool for users to access sales data. I have run into issues when I ask the pivot to filter too much. Does anyone have suggestions for another front-end tool for cubes that does not run into the limitations Excel does? A 3rd party tool? I know Access could be used, but I would like to purchase something, not write a program.

Thank you,

Carla C. Hertel|||Check out the following link:

link (http://www.hungrydogsoftware.com/)|||Here is a bigger list:

list (http://www.dwinfocenter.org/olap.html)

But the hungry dog intellibrowser is pretty good.|||Have you checked out Microsoft's pivot table Office Web Component? If you haven't it might be worth a look.

http://msdn.microsoft.com/library/default.asp?url=/library/en-us/odeopg/html/deovrworkingwithofficewebcomponents.asp

Good luck|||Panorama Novaview http://www.pansw.com/ OLAP front-end is made by the same company that licensed Microsoft the OLAP server technology for Analysis Services. It has write-back capabilities unlike Bus Obj, Cognos, Crystal et al and some nice features like bubble-up exceptions, wizards for query creation and good thin-client support.

SQLmag published a review of the leading OLAP front-ends for Microsoft Analysis Services:

http://www.sqlmag.com/Articles/Index.cfm?ArticleID=26486

There are many tools on the market, but this is the short list.|||i can recommend BusinessObjects - although some dum people (like our MIS dept) use universes only, BusinessObjects can work with MS Analytical Servicers just fine. Business Objects is more reporting tool. They have also special add-in into Excel called BusinessQuery.

microsoft has data analyzer (it's 100% analytical tool) http://www.microsoft.com/office/dataanalyzer/default.asp

jiri|||Hi,

as a client tool we use Q4bisAnalysis.
A quite nice little tool which can be learned within minutes.

More infos:
www.q4bis.com

regards

Michael

Have Insert statement, need equivalent Update.

Using MS SQL 2000.
I have 2 tables.
I have a table which has information regarding a computer scan. Each
record in this table has a column called MAC which is the unique ID for
each scan. The table in question holds the various scan results of
every scan from different computers. I have an insert statement that
works, however I am having trouble getting an update statement out of
it; not sure if I'm using the correct method to insert and that's why,
or if I'm just missing something. Anyway, the scan results are stored as an
XML document (@iTree), so I have a temp table that holds the relevant
info from that. Here is my insert statement for the temporary table:
INSERT INTO #temp
SELECT * FROM openxml(@iTree,
'ComputerScan/scans/scan/scanattributes/scanattribute', 1)
WITH(
ID nvarchar(50) './@ID',
ParentID nvarchar(50) './@ParentID',
Name nvarchar(50) './@Name',
scanattribute nvarchar(50) '.'
)
Now here is the insert statement for the table I am having trouble
with.
INSERT INTO tblScanDetail (MAC, GUIID, GUIParentID, ScanAttributeID,
ScanID, AttributeValue, DateCreated, LastModified)
SELECT @MAC, #temp.ID, #temp.ParentID,
tblScanAttribute.ScanAttributeID, tblScan.ScanID,
#temp.scanattribute, DateCreated = getdate(), LastModified = getdate()
FROM tblScan, tblScanAttribute JOIN #temp ON tblScanAttribute.Name =
#temp.Name
If there is a way to do this without the temporary table that would be
great, but I haven't figured a way around it yet, if anyone has any
ideas that would be great, thanks.|||Because your procedure doesn't use sp_executeSql you can use a table variable,
not a temp table.

Declare @Tab table
(
Field1 nvarchar(10),
Field2 int,
...(exactly the fields in the xml file)
)

INSERT INTO tblScanDetail (MAC, GUIID, GUIParentID, ScanAttributeID,
ScanID, AttributeValue, DateCreated, LastModified)
SELECT @MAC, t.ID, t.ParentID,
tblScanAttribute.ScanAttributeID, tblScan.ScanID,
t.scanattribute, getdate(), getdate()
FROM tblScan, tblScanAttribute
INNER JOIN @Tab AS t ON tblScanAttribute.Name = t.Name
Of course fields must match...
Hope it helps
Benga.
"rhaazy" <rhaazy@.gmail.com> wrote in message
news:1151351218.116752.197980@.m73g2000cwd.googlegroups.com...
> Using ms sql 2000
> I have 2 tables.
> I have a table which has information regarding a computer scan. Each
> record in this table has a column called MAC which is the unique ID for
> each Scan. The table in question holds the various scan results of
> every scan from different computers. I have an insert statement that
> works however I am having troulbe getting and update statement out of
> it, not sure if I'm using the correct method to insert and thats why or
> if I'm just missing something. Anyway the scan results is stored as an
> XML document(@.iTree) so I have a temp table that holds the relevent
> info from that. Here is my Insert statement for the temporary table.
> INSERT INTO #temp
> SELECT * FROM openxml(@.iTree,
> 'ComputerScan/scans/scan/scanattributes/scanattribute', 1)
> WITH(
> ID nvarchar(50) './@.ID',
> ParentID nvarchar(50) './@.ParentID',
> Name nvarchar(50) './@.Name',
> scanattribute nvarchar(50) '.'
> )
>
> Now here is the insert statement for the table I am having trouble
> with.
> INSERT INTO tblScanDetail (MAC, GUIID, GUIParentID, ScanAttributeID,
> ScanID, AttributeValue, DateCreated, LastModified)
> SELECT @.MAC, #temp.ID, #temp.ParentID,
> tblScanAttribute.ScanAttributeID, tblScan.ScanID,
> #temp.scanattribute, DateCreated = getdate(), LastModified =
> getdate()
> FROM tblScan, tblScanAttribute JOIN #temp ON tblScanAttribute.Name =
> #temp.Name
> If there is a way to do this without the temporary table that would be
> great, but I haven't figured a way around it yet, if anyone has any
> ideas that would be great, thanks.
>|||While this is good to know, my real problem is that I need the statement
that will do what my insert does except I need it to be an update
statement. I need the update because an insert is only going to happen
once for each client.
Benga wrote:
> Because your procedure doesn't use sp_executeSql you can use a table variable,
> not a temp table.
> Declare @.Tab table
> (
> Field1 nvarchar(10),
> Field2 int,
> ...(exactly the fields in the xml file)
> )
> INSERT INTO tblScanDetail (MAC, GUIID, GUIParentID, ScanAttributeID,
> ScanID, AttributeValue, DateCreated, LastModified)
> SELECT @.MAC, @.Tab.ID, @.TabParentID,
> tblScanAttribute.ScanAttributeID, tblScan.ScanID,
> @.Tab.scanattribute, getdate(), getdate()
> FROM tblScan
> INNER JOIN tblScanAttribute
> JOIN@.Tab ON tblScanAttribute.Name =
> @.Tab.Name
> Of course fields must match...
> Hope it helps
> Benga.
> "rhaazy" <rhaazy@.gmail.com> wrote in message
> news:1151351218.116752.197980@.m73g2000cwd.googlegroups.com...|||Fixed it, no problems.
rhaazy wrote:
> While this is good to know my real problem is that I need the statement
> that will do what my insert does accept I need it to be an update
> statement. I need the update because an insert is only going to happen
> once for each client.
> Benga wrote:

Have Insert statement, need equivalent Update.

Using MS SQL 2000.
I have 2 tables.
I have a table which has information regarding a computer scan. Each
record in this table has a column called MAC which is the unique ID for
each scan. The table in question holds the various scan results of
every scan from different computers. I have an insert statement that
works, however I am having trouble getting an update statement out of
it; not sure if I'm using the correct method to insert and that's why,
or if I'm just missing something. Anyway, the scan results are stored as an
XML document (@iTree), so I have a temp table that holds the relevant
info from that. Here is my insert statement for the temporary table:

INSERT INTO #temp
SELECT * FROM openxml(@iTree,
'ComputerScan/scans/scan/scanattributes/scanattribute', 1)
WITH(
ID nvarchar(50) './@ID',
ParentID nvarchar(50) './@ParentID',
Name nvarchar(50) './@Name',
scanattribute nvarchar(50) '.'
)

Now here is the insert statement for the table I am having trouble
with.

INSERT INTO tblScanDetail (MAC, GUIID, GUIParentID, ScanAttributeID,
ScanID, AttributeValue, DateCreated, LastModified)
SELECT @MAC, #temp.ID, #temp.ParentID,
tblScanAttribute.ScanAttributeID, tblScan.ScanID,
#temp.scanattribute, DateCreated = getdate(), LastModified = getdate()
FROM tblScan, tblScanAttribute JOIN #temp ON tblScanAttribute.Name =
#temp.Name

If there is a way to do this without the temporary table that would be
great, but I haven't figured a way around it yet, if anyone has any
ideas that would be great, thanks.|||rhaazy (rhaazy@.gmail.com) writes:
> INSERT INTO #temp
> SELECT * FROM openxml(@.iTree,
> 'ComputerScan/scans/scan/scanattributes/scanattribute', 1)
> WITH(
> ID nvarchar(50) './@.ID',
> ParentID nvarchar(50) './@.ParentID',
> Name nvarchar(50) './@.Name',
> scanattribute nvarchar(50) '.'
> )
> Now here is the insert statement for the table I am having trouble
> with.
> INSERT INTO tblScanDetail (MAC, GUIID, GUIParentID, ScanAttributeID,
> ScanID, AttributeValue, DateCreated, LastModified)
> SELECT @.MAC, #temp.ID, #temp.ParentID,
> tblScanAttribute.ScanAttributeID, tblScan.ScanID,
> #temp.scanattribute, DateCreated = getdate(),
> LastModified =
> getdate()
> FROM tblScan, tblScanAttribute JOIN #temp ON
> tblScanAttribute.Name =
> #temp.Name
> If there is a way to do this without the temporary table that would be
> great, but I haven't figured a way around it yet, if anyone has any
> ideas that would be great, thanks.

I have some difficulty understanding what your problem is. If all
you want to do is to insert from the XML document, then you don't
need the temp table, but you could use OPENXML directly in the
query.

But then you talk about an UPDATE as well, and if your aim is to insert
new rows, and update existing, it's probably better to use a temp
table (or a table variable), so that you don't have to run OPENXML twice.
Some DB engines support a MERGE command which performs the task of
UPDATE and INSERT in one statement, but this is not available in
SQL Server, not even in SQL 2005.

If this did not answer your question, could you please clarify?

--
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se

Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/pr...oads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodin...ions/books.mspx|||My app runs on all my company's PCs; every month a scan is performed and
the results are stored in a database. So the first time a scan is
performed for any PC it will be an insert, but after that it will
always be an update. I tried using openxml in my insert statement but
kept getting an error stating my subquery is returning more than one
result... So since I couldn't do it that way I'm trying this method.
All the relevant openxml is there, I just couldn't figure out how to
insert each column using it. If you have any suggestions I'm open to
give it a try.

Erland Sommarskog wrote:
> rhaazy (rhaazy@.gmail.com) writes:
> > INSERT INTO #temp
> > SELECT * FROM openxml(@.iTree,
> > 'ComputerScan/scans/scan/scanattributes/scanattribute', 1)
> > WITH(
> > ID nvarchar(50) './@.ID',
> > ParentID nvarchar(50) './@.ParentID',
> > Name nvarchar(50) './@.Name',
> > scanattribute nvarchar(50) '.'
> > )
> > Now here is the insert statement for the table I am having trouble
> > with.
> > INSERT INTO tblScanDetail (MAC, GUIID, GUIParentID, ScanAttributeID,
> > ScanID, AttributeValue, DateCreated, LastModified)
> > SELECT @.MAC, #temp.ID, #temp.ParentID,
> > tblScanAttribute.ScanAttributeID, tblScan.ScanID,
> > #temp.scanattribute, DateCreated = getdate(),
> > LastModified =
> > getdate()
> > FROM tblScan, tblScanAttribute JOIN #temp ON
> > tblScanAttribute.Name =
> > #temp.Name
> > If there is a way to do this without the temporary table that would be
> > great, but I haven't figured a way around it yet, if anyone has any
> > ideas that would be great, thanks.
> I have some difficulties to understand what your problem is. If all
> you want to do is to insert from the XML document, then you don't
> need the temp table, but you could use OPENXML directly in the
> query.
> But then you talk about an UPDATE as well, and if your aim is to insert
> new rows, and update existing, it's probably better to use a temp
> table (or a table variable), so that you don't have to run OPENXML twice.
> Some DB engines support a MERGE command which performs the task of
> UPDATE and INSERT in one statement, but this is not available in
> SQL Server, not even in SQL 2005.
> If this did not answer your question, could you please clarify?
> --
> Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se
> Books Online for SQL Server 2005 at
> http://www.microsoft.com/technet/pr...oads/books.mspx
> Books Online for SQL Server 2000 at
> http://www.microsoft.com/sql/prodin...ions/books.mspx|||Fixed it no problems.
rhaazy wrote:
> My app runs on all my companies PCs every month a scan is performed and
> the resulst are stored in a database. So the first time a scan is
> performed for any PC it will be an insert, but after that it will
> always be an update. I tried using openxml in my insert statement but
> kept getting an error stating my sub query is returning more than one
> result... So since I couldn't do it that way I'm trying this method.
> All the relevent openxml is there I just couldn't figure out how to
> insert each column using it. If you have any suggestions I'm open to
> give it a try.
> Erland Sommarskog wrote:
> > rhaazy (rhaazy@.gmail.com) writes:
> > > INSERT INTO #temp
> > > SELECT * FROM openxml(@.iTree,
> > > 'ComputerScan/scans/scan/scanattributes/scanattribute', 1)
> > > WITH(
> > > ID nvarchar(50) './@.ID',
> > > ParentID nvarchar(50) './@.ParentID',
> > > Name nvarchar(50) './@.Name',
> > > scanattribute nvarchar(50) '.'
> > > )
> > > > Now here is the insert statement for the table I am having trouble
> > > with.
> > > > INSERT INTO tblScanDetail (MAC, GUIID, GUIParentID, ScanAttributeID,
> > > ScanID, AttributeValue, DateCreated, LastModified)
> > > SELECT @.MAC, #temp.ID, #temp.ParentID,
> > > tblScanAttribute.ScanAttributeID, tblScan.ScanID,
> > > #temp.scanattribute, DateCreated = getdate(),
> > > LastModified =
> > > getdate()
> > > FROM tblScan, tblScanAttribute JOIN #temp ON
> > > tblScanAttribute.Name =
> > > #temp.Name
> > > > If there is a way to do this without the temporary table that would be
> > > great, but I haven't figured a way around it yet, if anyone has any
> > > ideas that would be great, thanks.
> > I have some difficulties to understand what your problem is. If all
> > you want to do is to insert from the XML document, then you don't
> > need the temp table, but you could use OPENXML directly in the
> > query.
> > But then you talk about an UPDATE as well, and if your aim is to insert
> > new rows, and update existing, it's probably better to use a temp
> > table (or a table variable), so that you don't have to run OPENXML twice.
> > Some DB engines support a MERGE command which performs the task of
> > UPDATE and INSERT in one statement, but this is not available in
> > SQL Server, not even in SQL 2005.
> > If this did not answer your question, could you please clarify?
> > --
> > Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se
> > Books Online for SQL Server 2005 at
> > http://www.microsoft.com/technet/pr...oads/books.mspx
> > Books Online for SQL Server 2000 at
> > http://www.microsoft.com/sql/prodin...ions/books.mspx
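Since the thread never spells out the UPDATE half: on SQL 2000, which has no MERGE, the usual pattern is an UPDATE from a join followed by an INSERT of only the rows that don't exist yet. A sketch against a simplified version of the poster's tables (the matching columns are a guess):

-- update detail rows that already exist for this machine
UPDATE d
SET AttributeValue = t.scanattribute,
LastModified = getdate()
FROM tblScanDetail d
INNER JOIN #temp t ON t.ID = d.GUIID
WHERE d.MAC = @MAC

-- insert detail rows seen for the first time
INSERT INTO tblScanDetail (MAC, GUIID, AttributeValue, DateCreated, LastModified)
SELECT @MAC, t.ID, t.scanattribute, getdate(), getdate()
FROM #temp t
WHERE NOT EXISTS (SELECT 1 FROM tblScanDetail d
WHERE d.MAC = @MAC AND d.GUIID = t.ID)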

Monday, March 19, 2012

Hardware Tuning for Queries

Hi,
I have several queries in a complicated database that join almost 10 tables
each. The query plans are good because there are a lot of index seeks and
merge/loop joins (hash joins have been eliminated by appropriate indexes). It
seems it doesn't lack any index. But as an example, a particular query that
joins 12 tables and returns 136,000 rows takes about 40 seconds at best
(sometimes 100 seconds). I believe that index/query tuning cannot
make this query better; only a hardware upgrade can help. I'd like to know
which hardware components I should upgrade to help such queries.
Some information that might help:
CPU: 2
Memory: 1GB
Disk: 1
Cache Hit Ratio: above 99%
Concurrent Connections: 40-50
It is worth mentioning that there is no bottleneck in any hardware
component (CPU, disk, memory) while monitoring with Performance Monitor. Also,
this query doesn't have a complex WHERE condition, but it has a lot of CASE
expressions in its columns.
Any help would be greatly appreciated,
Leila|||I can't really speak to the hardware side of things, but as a practice for
large joins like this I like to use Indexed Views and query the views
instead.
Read this article for some insight...
http://www.microsoft.com/technet/prodtechnol/sql/2005/ipsql05iv.mspx
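The idea in miniature (view, table and column names here are hypothetical; Qty is assumed NOT NULL, a requirement for SUM in an indexed view):

CREATE VIEW dbo.vOrderTotals
WITH SCHEMABINDING
AS
SELECT o.OrderID, COUNT_BIG(*) AS LineCount, SUM(l.Qty) AS TotalQty
FROM dbo.Orders o
INNER JOIN dbo.OrderLines l ON l.OrderID = o.OrderID
GROUP BY o.OrderID
GO
-- the unique clustered index is what materializes (and maintains) the result
CREATE UNIQUE CLUSTERED INDEX IX_vOrderTotals ON dbo.vOrderTotals (OrderID)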
thanks,
--
/*
Warren Brunk - MCITP - SQL 2005, MCDBA
www.techintsolutions.com
*/
"Leila" <Leilas@.hotpop.com> wrote in message
news:OOuN1Q%235GHA.3292@.TK2MSFTNGP02.phx.gbl...
> Hi,
> I have several queries in a complicated database that join almost 10
> tables each. The query plans are good because there are a lot of index
> seeks and merge/loop joins (Hash join have been eliminated by appropriate
> indexes). It seems it doesn't lack any index. But as an example, a
> particular query that joins 12 tables and returns 136,000 rows almost
> takes about 40 seconds in best time (sometimes 100 seconds). I believe
> that index/query tuning cannot make this query better, only hardware
> upgrade can help. I'd like to know upgrade to which hardware components
> can help such queries?
> Some information that might help:
> CPU: 2
> Memory: 1GB
> Disk: 1
> Cache Hit Ratio: above 99%
> Concurrent Connections: 40-50
> It is mentionable that there is no bottleneck while monitoring with
> Performance Monitor in any hardware component like CPU, Disk, Memory. Also
> this query doesn't have complex where condition, but it has a lot of CASE
> in columns.
> Any help would be greatly appreciated,
> Leila
>|||Indexed views might not be the answer. As I understand it, the benefits
that indexed views (and partitioned tables) provide are only available
within the Developer and Enterprise Editions of SQL Server. It is possible
to create an indexed view within other versions of SQL Server, but the query
optimizer cannot use the index.
If I am mistaken please correct me!
--
Keith Kratochvil
"Warren Brunk" <wbrunk@.techintsolutions.com> wrote in message
news:uo5zHd%235GHA.4116@.TK2MSFTNGP03.phx.gbl...
>I cant really speak to the hard ware side of things but as a practice for
>large joins like this I like to use Indexed Views and query the views
>intead.
> Read this article for some insight...
> http://www.microsoft.com/technet/prodtechnol/sql/2005/ipsql05iv.mspx
> thanks,
> --
> /*
> Warren Brunk - MCITP - SQL 2005, MCDBA
> www.techintsolutions.com
> */
>
> "Leila" <Leilas@.hotpop.com> wrote in message
> news:OOuN1Q%235GHA.3292@.TK2MSFTNGP02.phx.gbl...
>> Hi,
>> I have several queries in a complicated database that join almost 10
>> tables each. The query plans are good because there are a lot of index
>> seeks and merge/loop joins (Hash join have been eliminated by appropriate
>> indexes). It seems it doesn't lack any index. But as an example, a
>> particular query that joins 12 tables and returns 136,000 rows almost
>> takes about 40 seconds in best time (sometimes 100 seconds). I believe
>> that index/query tuning cannot make this query better, only hardware
>> upgrade can help. I'd like to know upgrade to which hardware components
>> can help such queries?
>> Some information that might help:
>> CPU: 2
>> Memory: 1GB
>> Disk: 1
>> Cache Hit Ratio: above 99%
>> Concurrent Connections: 40-50
>> It is mentionable that there is no bottleneck while monitoring with
>> Performance Monitor in any hardware component like CPU, Disk, Memory.
>> Also this query doesn't have complex where condition, but it has a lot of
>> CASE in columns.
>> Any help would be greatly appreciated,
>> Leila
>|||The engine considers the index on the view automatically for Ent/Dev
edition. For other editions, you need NoExpand hint to get the indexed view
to be considered.
e.g.
select *
from indexedview with (noexpand)
--
-oj
"Keith Kratochvil" <sqlguy.back2u@.comcast.net> wrote in message
news:%23ZPfWY$5GHA.2464@.TK2MSFTNGP06.phx.gbl...
> Indexed views might not be the answer. As I understand it, the benefits
> that indexed views (and partitioned tables) provide are only available
> within the Developer and Enterprise Editions of SQL Server. It is
> possible to create an indexed view within other versions of SQL Server,
> but the query optimizer cannot use the index.
> If I am mistaken please correct me!
> --
> Keith Kratochvil
>
> "Warren Brunk" <wbrunk@.techintsolutions.com> wrote in message
> news:uo5zHd%235GHA.4116@.TK2MSFTNGP03.phx.gbl...
>>I cant really speak to the hard ware side of things but as a practice for
>>large joins like this I like to use Indexed Views and query the views
>>intead.
>> Read this article for some insight...
>> http://www.microsoft.com/technet/prodtechnol/sql/2005/ipsql05iv.mspx
>> thanks,
>> --
>> /*
>> Warren Brunk - MCITP - SQL 2005, MCDBA
>> www.techintsolutions.com
>> */
>>
>> "Leila" <Leilas@.hotpop.com> wrote in message
>> news:OOuN1Q%235GHA.3292@.TK2MSFTNGP02.phx.gbl...
>> Hi,
>> I have several queries in a complicated database that join almost 10
>> tables each. The query plans are good because there are a lot of index
>> seeks and merge/loop joins (Hash join have been eliminated by
>> appropriate indexes). It seems it doesn't lack any index. But as an
>> example, a particular query that joins 12 tables and returns 136,000
>> rows almost takes about 40 seconds in best time (sometimes 100 seconds).
>> I believe that index/query tuning cannot make this query better, only
>> hardware upgrade can help. I'd like to know upgrade to which hardware
>> components can help such queries?
>> Some information that might help:
>> CPU: 2
>> Memory: 1GB
>> Disk: 1
>> Cache Hit Ratio: above 99%
>> Concurrent Connections: 40-50
>> It is mentionable that there is no bottleneck while monitoring with
>> Performance Monitor in any hardware component like CPU, Disk, Memory.
>> Also this query doesn't have complex where condition, but it has a lot
>> of CASE in columns.
>> Any help would be greatly appreciated,
>> Leila
>>
Because of the outer joins, an indexed view cannot be used here.
"Warren Brunk" <wbrunk@.techintsolutions.com> wrote in message
news:uo5zHd%235GHA.4116@.TK2MSFTNGP03.phx.gbl...
>I cant really speak to the hard ware side of things but as a practice for
>large joins like this I like to use Indexed Views and query the views
>intead.
> Read this article for some insight...
> http://www.microsoft.com/technet/prodtechnol/sql/2005/ipsql05iv.mspx
> thanks,
> --
> /*
> Warren Brunk - MCITP - SQL 2005, MCDBA
> www.techintsolutions.com
> */
>
> "Leila" <Leilas@.hotpop.com> wrote in message
> news:OOuN1Q%235GHA.3292@.TK2MSFTNGP02.phx.gbl...
>> Hi,
>> I have several queries in a complicated database that join almost 10
>> tables each. The query plans are good because there are a lot of index
>> seeks and merge/loop joins (Hash join have been eliminated by appropriate
>> indexes). It seems it doesn't lack any index. But as an example, a
>> particular query that joins 12 tables and returns 136,000 rows almost
>> takes about 40 seconds in best time (sometimes 100 seconds). I believe
>> that index/query tuning cannot make this query better, only hardware
>> upgrade can help. I'd like to know upgrade to which hardware components
>> can help such queries?
>> Some information that might help:
>> CPU: 2
>> Memory: 1GB
>> Disk: 1
>> Cache Hit Ratio: above 99%
>> Concurrent Connections: 40-50
>> It is mentionable that there is no bottleneck while monitoring with
>> Performance Monitor in any hardware component like CPU, Disk, Memory.
>> Also this query doesn't have complex where condition, but it has a lot of
>> CASE in columns.
>> Any help would be greatly appreciated,
>> Leila
>|||> Memory: 1GB
This is a bit low by today's standards. More memory can reduce disk i/o.
> Disk: 1
So you have only one disk drive on the server? No RAID? Data and log on
the same drive? I would at least add disks to ensure fault tolerance and
separate data and log files.
> It is mentionable that there is no bottleneck while monitoring with
> Performance Monitor in any hardware component like CPU, Disk, Memory.
What is the physical disk transfers/sec and bytes/sec on the disk? Note
that a disk bottleneck will not necessarily manifest itself as queue time.
You need to know the transfers/sec your particular disk can sustain in order
to determine whether or not you have a disk bottleneck.
> But as an example, a particular query that joins 12 tables and returns
> 136,000 rows almost takes about 40 seconds in best time (sometimes 100
> seconds).
What is your maximum response time requirement for this query? 136,000 rows
is a lot of data to return, and client-side processing of large data volumes
like this can be more significant than query execution time. It would be
embarrassing to throw server hardware at a performance problem that turns
out to be on the client side ;-)
--
Hope this helps.
Dan Guzman
SQL Server MVP
"Leila" <Leilas@.hotpop.com> wrote in message
news:OOuN1Q%235GHA.3292@.TK2MSFTNGP02.phx.gbl...
> Hi,
> I have several queries in a complicated database that join almost 10
> tables each. The query plans are good because there are a lot of index
> seeks and merge/loop joins (Hash join have been eliminated by appropriate
> indexes). It seems it doesn't lack any index. But as an example, a
> particular query that joins 12 tables and returns 136,000 rows almost
> takes about 40 seconds in best time (sometimes 100 seconds). I believe
> that index/query tuning cannot make this query better, only hardware
> upgrade can help. I'd like to know upgrade to which hardware components
> can help such queries?
> Some information that might help:
> CPU: 2
> Memory: 1GB
> Disk: 1
> Cache Hit Ratio: above 99%
> Concurrent Connections: 40-50
> It is mentionable that there is no bottleneck while monitoring with
> Performance Monitor in any hardware component like CPU, Disk, Memory. Also
> this query doesn't have complex where condition, but it has a lot of CASE
> in columns.
> Any help would be greatly appreciated,
> Leila
>|||Thanks indeed Dan!
Today I discovered a very strange thing that almost solved the problem! We
added another 512MB of memory to the server; now it has 1.5GB. I tried that
particular query again and the result was the same, about 40 seconds. I stopped
the SQL Server service and restarted it manually. I couldn't believe the
result! The query took 11 seconds on the first try, and once cached it stayed at
8 seconds for several tries! I restarted Windows again, and then tried
the query after booting; again 40 seconds. Then I manually stopped and
restarted the SQL Server service and it was back to 8 seconds again!
I monitored the memory consumed by SQL Server. Before the manual restart of
the service, SQL Server consumed memory very slowly, and it took 40 seconds to
fill about 120MB of RAM. After the manual restart, the memory consumed by SQL
Server (up to 120MB) grew much faster (8 seconds)!
I have no idea why a manual restart of the service should have this effect! I
suspect it could be a bug in Windows 2003.
However, I must take this server back to the production environment to test it
under load from users.
Any ideas?...
Thanks again,
Leila
"Dan Guzman" <guzmanda@.nospam-online.sbcglobal.net> wrote in message
news:%23pqMKZH6GHA.940@.TK2MSFTNGP03.phx.gbl...
>> Memory: 1GB
> This is a bit low by today's standards. More memory can reduce disk i/o.
>> Disk: 1
> So you have only one disk drive on the server? No RAID? Data and log on
> the same drive? I would at least add disks to ensure fault tolerance and
> separate data and log files.
>> It is mentionable that there is no bottleneck while monitoring with
>> Performance Monitor in any hardware component like CPU, Disk, Memory.
> What is the physical disk transfers/sec and bytes/sec on the disk? Note
> that a disk bottleneck will not necessarily manifest itself as queue time.
> You need to know the transfers/sec your particular disk can sustain in
> order to determine whether or not you have a disk bottleneck.
>> But as an example, a particular query that joins 12 tables and returns
>> 136,000 rows almost takes about 40 seconds in best time (sometimes 100
>> seconds).
> What is your maximum response time requirement for this query? 136,000
> rows is a lot of data to return and client-site processing of large data
> volumes like this can be more significant than query execution time. It
> would be embarrassing to throw server hardware at a performance problem
> that turns out to be on the client side ;-)
> --
> Hope this helps.
> Dan Guzman
> SQL Server MVP
> "Leila" <Leilas@.hotpop.com> wrote in message
> news:OOuN1Q%235GHA.3292@.TK2MSFTNGP02.phx.gbl...
>> Hi,
>> I have several queries in a complicated database that join almost 10
>> tables each. The query plans are good because there are a lot of index
>> seeks and merge/loop joins (Hash join have been eliminated by appropriate
>> indexes). It seems it doesn't lack any index. But as an example, a
>> particular query that joins 12 tables and returns 136,000 rows almost
>> takes about 40 seconds in best time (sometimes 100 seconds). I believe
>> that index/query tuning cannot make this query better, only hardware
>> upgrade can help. I'd like to know upgrade to which hardware components
>> can help such queries?
>> Some information that might help:
>> CPU: 2
>> Memory: 1GB
>> Disk: 1
>> Cache Hit Ratio: above 99%
>> Concurrent Connections: 40-50
>> It is mentionable that there is no bottleneck while monitoring with
>> Performance Monitor in any hardware component like CPU, Disk, Memory.
>> Also this query doesn't have complex where condition, but it has a lot of
>> CASE in columns.
>> Any help would be greatly appreciated,
>> Leila
>|||You might take a look at the execution plans. I would expect these to be
the same with the same query/data but the only way to make sure is to check.
Given the same plans, resource contention is the most likely cause. I
suggest you monitor CPU and disk to see if you can spot what the contention
might be during slowness.
--
Hope this helps.
Dan Guzman
SQL Server MVP
"Leila" <Leilas@.hotpop.com> wrote in message
news:egcODSL6GHA.4476@.TK2MSFTNGP04.phx.gbl...
> Thanks indeed Dan!
> Today I discovered very strange thing that almost solved the problem! We
> added another 512MB memory to the server, now it has 1.5GB. I tried that
> particular query again, the result was the same, about 40 seconds. I
> stopped the SQL Server service and restarted it manually. I couldn't
> believe the result! The query took 11 seconds at first try, and when
> cached it stayed on 8 seconds for several tries! I restarted the windows
> again, and then tried the query after booting; again 40 seconds. Then
> manually stopped and restarted SQL Server service and it gained 8 seconds
> again!
> I monitored the memory consumed by SQL Server. Before manual restart of
> service, SQL Server consumed memory very slowly, and it took 40 seconds to
> fill about 120MB of RAM. After manual restart, the memory consumed by SQL
> Server(up to 120MB) was growing too fast (8 seconds)!
> I have no idea that why a manual restart of service must have this effect!
> I am suspected that it can be a bug in Windows 2003.
> However I must take this server back to production environment to test it
> under load of users.
> Any ideas?...
> Thanks again,
> Leila
>
> "Dan Guzman" <guzmanda@.nospam-online.sbcglobal.net> wrote in message
> news:%23pqMKZH6GHA.940@.TK2MSFTNGP03.phx.gbl...
>> Memory: 1GB
>> This is a bit low by today's standards. More memory can reduce disk i/o.
>> Disk: 1
>> So you have only one disk drive on the server? No RAID? Data and log on
>> the same drive? I would at least add disks to ensure fault tolerance and
>> separate data and log files.
>> It is mentionable that there is no bottleneck while monitoring with
>> Performance Monitor in any hardware component like CPU, Disk, Memory.
>> What is the physical disk transfers/sec and bytes/sec on the disk? Note
>> that a disk bottleneck will not necessarily manifest itself as queue
>> time. You need to know the transfers/sec your particular disk can sustain
>> in order to determine whether or not you have a disk bottleneck.
>> But as an example, a particular query that joins 12 tables and returns
>> 136,000 rows almost takes about 40 seconds in best time (sometimes 100
>> seconds).
>> What is your maximum response time requirement for this query? 136,000
>> rows is a lot of data to return and client-site processing of large data
>> volumes like this can be more significant than query execution time. It
>> would be embarrassing to throw server hardware at a performance problem
>> that turns out to be on the client side ;-)
>> --
>> Hope this helps.
>> Dan Guzman
>> SQL Server MVP
>> "Leila" <Leilas@.hotpop.com> wrote in message
>> news:OOuN1Q%235GHA.3292@.TK2MSFTNGP02.phx.gbl...
>> Hi,
>> I have several queries in a complicated database that join almost 10
>> tables each. The query plans are good because there are a lot of index
>> seeks and merge/loop joins (Hash join have been eliminated by
>> appropriate indexes). It seems it doesn't lack any index. But as an
>> example, a particular query that joins 12 tables and returns 136,000
>> rows almost takes about 40 seconds in best time (sometimes 100 seconds).
>> I believe that index/query tuning cannot make this query better, only
>> hardware upgrade can help. I'd like to know upgrade to which hardware
>> components can help such queries?
>> Some information that might help:
>> CPU: 2
>> Memory: 1GB
>> Disk: 1
>> Cache Hit Ratio: above 99%
>> Concurrent Connections: 40-50
>> It is mentionable that there is no bottleneck while monitoring with
>> Performance Monitor in any hardware component like CPU, Disk, Memory.
>> Also this query doesn't have complex where condition, but it has a lot
>> of CASE in columns.
>> Any help would be greatly appreciated,
>> Leila
>>
>

Friday, March 9, 2012

Hardware configuration help for MS SQL server

This is what one of the tables looks like - they are all similar and there
are about 60 different tables:

CREATE TABLE SalesData1(
TimeStamp smalldatetime,
CustomerID varchar(8),
S1 numeric,
S2 numeric,
S3 numeric,
S4 numeric,
X numeric
)

I'm not sure on the data types; S1 through S4 are actually float values with
two decimal places, and X is an unsigned long value.

Anyway, the bigger question is what do we need at *minimum* for hard drives
and RAM, if we are using SQL 7.0* under these circumstances:

1. Inserting up to 600,000 records every 10 minutes
2. Updating up to 70,000 records every 10 minutes

We can use BCP for inserting the data. There are many different tables with
an average of 10,000 records being inserted into each one every 10 minutes.

I nearly ordered a couple servers with dual 400GB SATA hard drives using
RAID1, but someone told me that with only 2 drives - it would cause SQL
server to work harder. So do we need a whole bunch of smaller 40GB drives??
Should they be SATA drives?

Could one server with dual Xeon CPUs do this, or do we need more than one
server?
And would 2GB of RAM be sufficient?

All I know is that we will be inserting 600k records, updating 70k records,
and reading 500,000 every 10 minutes.

Do I need to provide more info on anything specifically?

We should have an SQL consultant helping us in the near future - I'm just
trying to get a head start with the hardware. Our budget is small, in fact I
may have to build the servers myself out of spare parts and things on sale
at Fry's :)

Thanks for your help. I will provide more info if needed.

*Can't afford to upgrade to SQL 2000|||I highly recommend that you check out Kevin Kline's article Bare Metal
Tuning in SQL Server Magazine. He gives a step-by-step breakdown of
how changes in hardware affect performance; granted, it's written from
a SQL 2000 perspective, but it might give you some ideas.

http://www.windowsitpro.com/Article...46492.html?Ad=1

Note that the article requires a subscription, but it's a great
investment.

Stu|||"Stu" <stuart.ainsworth@.gmail.com> wrote in message
news:1122484729.540188.299270@.f14g2000cwb.googlegroups.com...
> I highly recommend that you check out Kevin Kline's article Bare Metal
> Tuning in SQL Server Magazine. He gives a step-by-step breakdown of
> how changes in ahrdware affect performance; granted, it's written from
> a SQL 2000 perspective, but it might give you some ideas.
> http://www.windowsitpro.com/Article...46492.html?Ad=1
> Note that the article requires a subscription, but it's a great
> investment.
> Stu

Based on that article, how does this look (if I understand correctly):

Dual 64bit AMD processors, 4GB fast RAM.

Drive C: is 20GB SATA drive using FAT32 without RAID. The OS goes here.

Drive E: is 40GB total storage RAID 1 (two 80GB SATA drives) with NTFS. SQL
program files and logs go on here.

Drive F: to Q: are 40GB drives using NTFS and RAID 5. Data storage goes
here.

Thanks|||Actually, one thing he mentioned in his presentation was that AMD
Opterons actually performed worse than 32-bit XEON processors, although
he didn't mention it in his article. XEON processors act like dual
standard CPUs, so two XEONs = 4 virtual processors.

If you have RAID 5, then you wouldn't have logical drives F through Q;
it would just show up as F. Be sure that you're using hardware RAID,
and not software-based RAID.

Other than that, it looks good. Get the biggest processors you can; for
that much data, you'll need it.

Hardware configuration

Hi Friends,
This is my first post here. Hope for a good response from the gurus ;-)
here it is...
We have a medium size database (up to 50 tables) out of which one table is massive. It gets populated online at very high speed (> 20,000 records a day), so it grows quite large in a month. Now the problem is that once it grows to that extent, at the end of 2 months any query fired on it takes too long to execute (to the magnitude of 10 minutes). We have the following hardware configuration:

1. 1.2 GHz Pentium 4
2. 40 GB HDD
3. 384 MB RAM.

What should be the server configuration to boost the performance of the database??

Regards,
Vishal.|||It may not be your hardware configuration. How often do you reorg the database?|||Currently we only back up the table (the big one) and trim it to achieve the speed. Is that what you are talking about? Is there anything else we need to do in order to reorg the db?|||after 2 months you should have something like 1,200,000 records, yes?

what sort of data is in these records?

what indexes do you have on the table?

what are the searches being performed?

I imagine that if you applied some suitable indexes and possibly did some restructuring you should have no problems with this amount of data.

Yes, eventually it will slow as more data comes on board but no where near the 10 mins you are talking about....|||What rokslide mentioned will help greatly. But Reorg is a quick fix to boost performance for a db with large amount of reads and writes because it rearrange the data and index pages. To do a reorg just simply use Database Maintenance Plan Wizard and check the Reorganize Data and Index Page. Execute the plan once a month, it will greatly help the performance of your database.

Wednesday, March 7, 2012

hard query

Hello

I have a problem with a query.

I have 2 tables in my SQL db: one named Student with the fields (ID, Name),

and one table named Stu_Courses with the fields (ID, Course_Name).

OK,

I want to query the students that have a given course, e.g. MCSE.

The result that I want from the 2 tables:

ID | NAME | Course_NAME

in MSHFLEXGRID1

Can anyone help me, please...

I am assuming that the ID column in Student is an Identity and that the ID in Stu_Courses is a ForeignKey that relates back to the Student in the Students table.

Select S.ID, S.Name, SC.Course_Name
FROM Student S
INNER JOIN Stu_Courses SC ON S.ID = SC.ID
WHERE SC.Course_Name = 'MCSE'

|||

hi,

I don't know if the ID in your courses table is the course ID or a student ID.

I guess the Student table is a base table and the Courses table is a base table; if you are going to connect them you will have to enter a course per student. I guess you have to add a third table to hold records from both base tables for data entry (RecordID, StudentID, CourseID).

In general, to select related data from different tables you can use (FROM table1 INNER JOIN table2 ON table1.id = table2.relatedidfield).

best regards

|||just reference the corresponding tuple table.field in the select statement
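A sketch of the third (junction) table suggested above, with hypothetical names and types:

CREATE TABLE Student_Course_Entry (
RecordID int IDENTITY PRIMARY KEY,
StudentID int NOT NULL REFERENCES Student (ID),
CourseID int NOT NULL REFERENCES Stu_Courses (ID)
)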

Friday, February 24, 2012

Handling autogenerated PK/FK conflicts in merge replication

We use autogenerated primary keys in most of our tables. Some of these keys are also foreign keys in other tables. Right now there is only 1 database server at a central location, but now there is a need to have multiple database servers at different locations. Data from these remote sites needs to be replicated to the central server, and some data would also be distributed from the central server to selected remote sites.

If I could redesign, I would have chosen something like GUIDs for the primary keys, or a combination of something like ServerName and an autogenerated number as a combined key. But that's not possible right now. How do I handle merge replication conflicts in this case?

I am looking for some pointers as to how to handle this case. If it were just a simple table with 1 primary key, that would be easy, as I could throw away the primary key on the remote server and let the central server create a new key when the data is inserted. But in my case, a single table can be related to 5 or more other tables through these autogenerated keys. Any help is much appreciated.

GUIDs would be easiest; however, you can also make a composite key with ID and hostname. That way the key will always be unique.

I would not recommend a conflict resolver for the primary key; you will end up with a mess eventually, I think.

Martin

|||

you can also use identity ranges for subscribers so they don't conflict when they upload back to the publisher server.
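A sketch of how those ranges get assigned per merge article - the publication and article names are made up, but the parameters are sp_addmergearticle's (SQL 2005 syntax):

EXEC sp_addmergearticle
@publication = 'CentralPub',
@article = 'Orders',
@source_object = 'Orders',
@identityrangemanagementoption = 'auto',
@identity_range = 10000, -- identity block leased to each subscriber
@pub_identity_range = 100000 -- identity block reserved for the publisher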
