Showing posts with label calculation. Show all posts

Monday, March 12, 2012

I am struggling through views

Hi there, it's me again. I am having trouble with a view again. I am
trying to do a calculation, but there are some checks that need to be
taken into consideration. Maybe a view is not the right way to deal
with this. I don't know.

This is the beginning of my query.

SELECT COALESCE(f.FileNumber, i.FileNumber) AS FileNumber,
       i.InvoiceNumber, i.InvoiceValue, il.LineID,
       MPF = .21 * (il.UnitCost * il.UnitQty + il.AddMMV - il.MinusMMV - il.MinusNDC + il.ErrorAmt)
FROM tblFILE f
     INNER JOIN tblINVOICE i ON (f.FileNumber = i.FileNumber)
     LEFT OUTER JOIN tblINVOICE_LINE il ON (i.InvoiceID = il.InvoiceID)

This works just as it should. However, if the sum of all MPFs per
file totals less than 25.00 or more than 485.00, then each MPF has to be
recalculated as:

Percentage of TotalEnteredValue = InvoiceValue / (il.UnitCost *
il.UnitQty + il.AddMMV - il.MinusMMV - il.MinusNDC + il.ErrorAmt)

Percentage of TotalEnteredValue * 25.00 = MPF or
Percentage of TotalEnteredValue * 485.00 = MPF

Can you do something like this in a View? Or do I need to do
something like a trigger?
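[Editorially, it may help to note that conditional logic like this can, in principle, live inside a view via a CASE expression over a per-group total. A minimal sketch with entirely made-up table and column names, just to show the shape, not the poster's actual schema:]

```sql
-- Hypothetical illustration: pick one of two formulas per row
-- depending on an aggregate over the row's group.
CREATE VIEW ExampleCalc AS
SELECT d.GroupID, d.Amount,
       CASE WHEN t.GroupTotal < 25.00
            THEN d.Amount / t.GroupTotal * 25.00   -- prorated fallback
            ELSE d.Amount * 0.21                   -- normal formula
       END AS Fee
FROM tblDetail d
JOIN (SELECT GroupID, SUM(Amount) AS GroupTotal
      FROM tblDetail
      GROUP BY GroupID) AS t
  ON d.GroupID = t.GroupID
```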

I greatly appreciate all help. I am struggling to get a foothold on
views. I am getting there.

Well, I am getting desperate. I must have explained myself poorly. I
apologize. I absolutely cannot figure out which approach would best
suit my need because I don't know about any. I am still a newbie. I
am pretty good at the basics of sql server 2000, but I am totally an
idiot when it comes to anything complicated or outside of what I know,
which is tables, a little stored procedure and some view stuff. I
have been looking on Books Online but still cannot figure out what my
best approach should be.

I have tables and a view...

CREATE TABLE tblFILE (
FileNumber bigint not null primary key
)

CREATE TABLE tblINVOICE(
InvoiceID bigint not null primary key
identity,
InvoiceNumber varchar(35) not null,
FileNumber bigint not null,

CONSTRAINT FK_tblFILE FOREIGN KEY (FileNumber) REFERENCES
tblFILE(FileNumber)
)

CREATE TABLE tblINVOICE_LINE (
LineID bigint not null primary key
identity,
InvoiceID bigint not null,
UnitCost money,
UnitQty money,
AddMMV money,
MinusMMV money,
MinusNDC money,
ErrorAmt money,

CONSTRAINT FK_tblINVOICE FOREIGN KEY (InvoiceID) REFERENCES
tblINVOICE(InvoiceID)

)

CREATE VIEW TEV_view AS (

SELECT i.FileNumber, COALESCE (i.InvoiceID, il.InvoiceID) AS
InvoiceID, i.InvoiceNumber, il.LineID, il.UnitQty, il.UnitCost,
il.AddMMV, il.MinusNDC, il.ErrorAmt, il.UnitQty * il.UnitCost +
il.AddMMV - il.MinusMMV - il.MinusNDC + il.ErrorAmt AS LineEV, .0021 *
(il.UnitQty * il.UnitCost + il.AddMMV - il.MinusMMV - il.MinusNDC +
il.ErrorAmt) AS MPFbyLine
FROM dbo.tblINVOICE i INNER JOIN
dbo.tblINVOICE_LINE il ON i.InvoiceID = il.InvoiceID
)

So, my MPFbyLine is where my headache, confusion, frustration,
desperation is coming from. A file has many invoices...each invoice
has many lines. Each line has an MPFbyLine value. This is all fine
and dandy...except if the sum of all MPFbyLine values for a file is
less than 25.00 or more than 485.00.

If this is the case then the MPFbyLine for each line in that file must
be recalculated using a formula other than (.0021 * LineEV).
Instead, the sum of all LineEVs for the file must be determined, and
then each LineEV for the file must be divided by that sum to get
each line's percentage of the file's value. Then the new MPFbyLine
is calculated as the line's percentage of value * 25.00 (the minimum) or
* 485.00 (the maximum) to get its new value.
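[To make the proration concrete with invented numbers: suppose a file has two lines with LineEVs of 6000 and 4000, so the file total is 10000 and the normal MPF sum would be .0021 * 10000 = 21.00, which is below the 25.00 minimum. Each line's share and recalculated MPFbyLine can be checked directly:]

```sql
-- Invented figures only; 6000/10000 = 0.6 and 4000/10000 = 0.4,
-- so the prorated MPFs sum back to the 25.00 minimum.
SELECT 6000.0 / 10000.0 * 25.00 AS Line1MPF,   -- 15.00
       4000.0 / 10000.0 * 25.00 AS Line2MPF    -- 10.00
```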

So, I figured I must need to use Transact SQL somehow, possibly in a
trigger. Or maybe i need a calculated field, which I have found
reference to. Either way I am stumped. I have been messing with the
below trigger, but I feel like I am on the wrong track. I am sorry if
this is longwinded...I really need help. I think I am better off
doing this in sql than programming it directly into my vb application.
I would really, really appreciate any help.

CREATE TRIGGER CalcMPF_trg ON [dbo].[TEV_view]
FOR INSERT, UPDATE, DELETE
AS

DECLARE @Min money,
    @Max money,
    @TEV money,
    @TotMPF money,
    @PercentOfTEV float

SET @Min = 25.00
SET @Max = 485.00

SELECT SUM(MPFByLine), SUM(LineEV) FROM dbo.TEV_view
GROUP BY FileNumber
SET @TEV = SUM(LineEV)
SET @TotMPF = SUM(MPFByLine)

IF @TotMPF < @Min THEN
    @PercentOfTEV = @TEV

Without any DDL, all anyone can do is guess. And under reasonable
assumptions, the code looks bad. For example, why are you writing
"COALESCE(F.file_nbr, I.file_nbr) AS file_nbr" when you join the two
tables on (F.file_nbr = I.file_nbr) so neither can be NULL??

Why do you do a left outer join on invoices and invoice lines? Do you
have invoices without any lines? As an aside, I hope that your data
model is not so screwed up that you mimicked the PHYSICAL layout of
the paper order form and actually copied the lines from the form
instead of putting each product in a row in an "Invoice Details" or
"Invoice Items" table. But with things like "line_id", it certainly
looks like it!

Think about the same item appearing in multiple PHYSICAL lines of a
paper form and all the screw-ups it causes.

Why do you have both an "invoice_nbr" and an "invoiceid"? Never use
two names for one data element.

Why do you have "add_mmv" and "minus_mmv", but no plain, simple, mmv?
Why is "unit_qty" not plain old "qty_ordered"? In fact try to define
what "unit_qty" means. Data element names like this make no sense;
please read the ISO-11179 rules.

Please tell me that "I.invoice_value" is not a computed value, like
the total of the items, mixed into the same level of aggregation as
its components.

You mix Standard SQL and proprietary code, use silly prefixes, and
singular names for tables, etc. In short, this needs some clean-up,
so I made a few guesses.

Since a change to any order could cause you to re-calculate
everything, put this in a VIEW and then filter the view when you use
it. First calculate everything you will need; let the optimizer
worry about factoring sub-expressions.

CREATE VIEW Foobar
(file_nbr, invoice_nbr, invoice_value, line_id, normal_mpf, low_mpf,
high_mpf)
AS
SELECT F.file_nbr, I.invoice_nbr, I.invoice_value, L.line_id,
0.21 * (L.unit_cost * L.qty_ordered + L.add_mmv - L.minus_mmv -
L.minus_ndc + L.error_amt) AS normal_mpf,

(I.invoice_value / (L.unit_cost *
L.qty_ordered + L.add_mmv - L.minus_mmv - L.minus_ndc + L.error_amt)
* 25.00) AS low_mpf,

(I.invoice_value / (L.unit_cost *
L.qty_ordered + L.add_mmv - L.minus_mmv - L.minus_ndc + L.error_amt)
* 485.00) AS high_mpf

FROM Files AS F, Invoices AS I, Invoice_items AS L
WHERE F.file_nbr = I.file_nbr
AND I.invoice_nbr = L.invoice_nbr;

Now the view needs to do a self-reference, and you could make this
into a VIEW also.

SELECT file_nbr, invoice_nbr, invoice_value, line_id,
CASE WHEN (SELECT SUM(normal_mpf) FROM Foobar AS F1
WHERE F1.file_nbr = F0.file_nbr) < 25.00
THEN low_mpf
WHEN (SELECT SUM(normal_mpf) FROM Foobar AS F1
WHERE F1.file_nbr = F0.file_nbr) > 485.00
THEN high_mpf
ELSE normal_mpf END AS mpf
FROM Foobar AS F0;
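[As Joe notes, the query above could itself be made into a VIEW and filtered at use time. A sketch of that second view, using his guessed schema names (Foobar, file_nbr, etc.) and a made-up name FoobarMPF:]

```sql
-- Hypothetical wrapper view over the Foobar view above.
CREATE VIEW FoobarMPF (file_nbr, invoice_nbr, invoice_value, line_id, mpf)
AS
SELECT file_nbr, invoice_nbr, invoice_value, line_id,
       CASE WHEN (SELECT SUM(normal_mpf) FROM Foobar AS F1
                  WHERE F1.file_nbr = F0.file_nbr) < 25.00
            THEN low_mpf
            WHEN (SELECT SUM(normal_mpf) FROM Foobar AS F1
                  WHERE F1.file_nbr = F0.file_nbr) > 485.00
            THEN high_mpf
            ELSE normal_mpf END
FROM Foobar AS F0;

-- Then filter the view when you use it, e.g. one file's lines:
-- SELECT line_id, mpf FROM FoobarMPF WHERE file_nbr = 42;
```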

Was that sufficiently brutal?

[posted and mailed, please reply in news]

Rowan (phantomtoe@yahoo.com) writes:
> So, my MPFbyLine is where my headache, confusion, frustration,
> desperation is coming from. A file has many invoices...each invoice
> has many lines. Each line has an MPFbyLine value. This is all fine
> and dandy...except if the sum of all MPFbyLine values for a file is
> less than 25.00 or more than 485.00.
> If this is the case then the MPFbyLine for each line in that file must
> be recalculated using a formula other than the (.0021 * LineEV).
> Instead, the sum of all LineEVs for the file must be deteremined, and
> then each LineEV for the file must be divided by the Sum of all to get
> each line's percentage of value of the file. Then the new MPFbyLine
> is calculated as the line's percentage of value * 25.00min or
> 485.00max to get its new value.

First, I am sorry that the answer you got from Joe Celko was so completely
inappropriate. He knows his SQL (which is the same SQL that you and
I use), but his social capabilities are lacking. And apparently he has not
the slightest understanding that some people who post here have fairly
weak experience of SQL.

Anyway, I looked at your problem, and I think I have a solution. I could
not completely make out the exact rules when the total exceeded 485 or was
below 25. But hopefully this gives you some help to find out the rest
yourself.

You discuss the possibility of a computed column, and I have introduced
one. Not so much because it helps us to write the view, but only to make
it less verbose. This is what tblINVOICE_LINE looks like:

CREATE TABLE tblINVOICE_LINE (
LineID bigint not null primary key identity,
InvoiceID bigint not null,
UnitCost money,
UnitQty money,
AddMMV money,
MinusMMV money,
MinusNDC money,
ErrorAmt money,
LineEV AS UnitQty * UnitCost + AddMMV - MinusMMV - MinusNDC + ErrorAmt,
CONSTRAINT FK_tblINVOICE FOREIGN KEY (InvoiceID) REFERENCES
tblINVOICE(InvoiceID)

)

(By the way, I recommend to always include explicit NULL and NOT NULL
indicators for all columns.)

And here is the view:

CREATE VIEW TEV_view AS
SELECT i.FileNumber, COALESCE (i.InvoiceID, il.InvoiceID) AS
InvoiceID, i.InvoiceNumber, il.LineID, il.UnitQty, il.UnitCost,
il.AddMMV, il.MinusNDC, il.ErrorAmt, il.LineEV,
MPFbyLine = CASE WHEN t.totalEV <= 25 THEN il.LineEV / t.totalEV * 25
WHEN t.totalEV >= 485 THEN il.LineEV / t.totalEV * 480
ELSE 0.0021 * il.LineEV
END
FROM tblINVOICE i
JOIN tblINVOICE_LINE il ON i.InvoiceID = il.InvoiceID
JOIN (SELECT i.FileNumber, totalEV = SUM(il.LineEV)
FROM tblINVOICE i
JOIN tblINVOICE_LINE il ON i.InvoiceID = il.InvoiceID
GROUP BY i.FileNumber) AS t ON i.FileNumber = t.FileNumber

I've added a derived table which gives you the totalEV per file.
Derived tables are a very powerful feature in SQL. You can see them
as temporary tables within the query. Logically, that is. They are
never materialized, and the optimizer may take shortcuts with
the rest of the query as long as the result is correct.
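[A minimal illustration of the derived-table idea, using only the tables and the LineEV computed column from this thread; the ShareOfFile alias is invented here:]

```sql
-- The derived table t computes one totalEV row per file; joining it
-- back to the detail rows puts the file total next to each line.
SELECT i.FileNumber, il.LineID, il.LineEV,
       il.LineEV / t.totalEV AS ShareOfFile   -- each line's fraction of its file
FROM tblINVOICE i
JOIN tblINVOICE_LINE il ON i.InvoiceID = il.InvoiceID
JOIN (SELECT i2.FileNumber, SUM(il2.LineEV) AS totalEV
      FROM tblINVOICE i2
      JOIN tblINVOICE_LINE il2 ON i2.InvoiceID = il2.InvoiceID
      GROUP BY i2.FileNumber) AS t
  ON i.FileNumber = t.FileNumber
```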

--
Erland Sommarskog, SQL Server MVP, sommar@algonet.se

Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techin.../2000/books.asp

Thank you very much, Erland. Your answer here and in previous posts
have been greatly appreciated.

Erland Sommarskog <sommar@algonet.se> wrote in message news:<Xns93E0E4AED352Yazorman@127.0.0.1>...

Wednesday, March 7, 2012

HyperThreading and NT Fibers

We've got a customer whose system is performing below average. For example, a billing calculation that takes 13 minutes on my laptop (XP PRO, 2.2 GHz, 1GB RAM, 4200RPM HD, SQL 2000 SP3) takes 28 minutes on their server (WIN2000, 2x1.5GHz, 2GB RAM, RAID5 array, SQL 2000 SP3). We turned on the performance monitors recently and watched a bill run. The hard drives and RAM were barely used while the 4 processor instances (2 physical = 4 logical with hyperthreading) were all consistently above 90%.

We deduced that the processors were the obvious bottleneck, but we are unsure why a less powerful machine (my laptop) out-performs the more powerful server. The only thing I can think is that the hyper-threading or NT Fiber setting is adversely affecting the processors.

Has anyone had any problems with hyperthreading or NT fibers, or have any ideas as to where I might start looking?
Thanks in advance,
Dean

Does the calculation use the same query plan on both systems?
Geoff N. Hiten
Microsoft SQL Server MVP
Senior Database Administrator
Careerbuilder.com
I support the Professional Association for SQL Server
www.sqlpass.org
"Dean" <anonymous@discussions.microsoft.com> wrote in message
news:EE7A4018-48E7-405C-90F2-2CF0FD54F256@microsoft.com...
> We've got a customer whose system is performing below average. Example, a
billing calculation that takes 13 minutes on my laptop (XP PRO, 2.2 GHz, 1GB
RAM, 4200RPM HD, SQL 2000 SP3) takes 28 minutes on their server (WIN2000,
2-1.5GHz, 2GB RAM, RAID5 array, SQL 2000 SP3). We turned on the performance
monitors recently and watched a bill run. The hard drives and RAM were
barely used while the 4 processor instances (2 physical = 4 logical with
hyperthread) were all consistently above 90%.
> We deduced that the processors were the obvious bottleneck but we are
unsure why a less powerful machine (my laptop) out-performs the more
powerful server. The only thing I can think is that the hyper-threading or
NT Fiber setting is adversely affecting the processors.
> Has anyone had any problems with hyperthreading or NT fibers or have any
ideas as to where I might start looking?
> Thanks in advance,
> Dean
>
You can't compare these two systems side by side. For instance, your laptop
has a faster processor (even though it only has one) and most likely has
more level 1 cache. The disk in the laptop probably has more cache as well
which can make a big difference over a Raid 5. That said you can still be
running into a multi-processor issue but it is hard to say for sure. Try
running the process with MAXDOP set to 1 and see if that makes a difference.
Are you sure the disk queues are fine? Is there just one RAID 5 and is
everything (data, tempdb, logs) on the same array?
Andrew J. Kelly
SQL Server MVP
"Dean" <anonymous@discussions.microsoft.com> wrote in message
news:EE7A4018-48E7-405C-90F2-2CF0FD54F256@microsoft.com...
> We've got a customer whose system is performing below average. Example, a
billing calculation that takes 13 minutes on my laptop (XP PRO, 2.2 GHz, 1GB
RAM, 4200RPM HD, SQL 2000 SP3) takes 28 minutes on their server (WIN2000,
2-1.5GHz, 2GB RAM, RAID5 array, SQL 2000 SP3). We turned on the performance
monitors recently and watched a bill run. The hard drives and RAM were
barely used while the 4 processor instances (2 physical = 4 logical with
hyperthread) were all consistently above 90%.
> We deduced that the processors were the obvious bottleneck but we are
unsure why a less powerful machine (my laptop) out-performs the more
powerful server. The only thing I can think is that the hyper-threading or
NT Fiber setting is adversely affecting the processors.
> Has anyone had any problems with hyperthreading or NT fibers or have any
ideas as to where I might start looking?
> Thanks in advance,
> Dean
>
I expect so. On my laptop I just took a copy of their database and re-ran the process.
We monitored the server with performance counters on the processors, RAM and drives. Yes, it is one big RAID partition, but the monitors showed almost no activity on the disks and very little on the RAM while the processors were very high.
Look at the estimated query execution plan on both systems. Also, check the
disk queue length on your data drives. A long queue length may indicate a
bottleneck at the hardware layer.
Geoff N. Hiten
Microsoft SQL Server MVP
Senior Database Administrator
Careerbuilder.com
I support the Professional Association for SQL Server
www.sqlpass.org
"Dean" <anonymous@discussions.microsoft.com> wrote in message
news:601177D1-D700-441B-9607-A15EA29855B4@microsoft.com...
> I expect so. On my laptop I just took a copy of their database and re-ran
the process.
the process.

I just resolved a performance issue related to the fact that the execution plan was different between my development & production.
It was directly related to differences in hardware, specifically the environments you describe. I wouldn't assume that the execution
plans are the same. I'd run an execution plan in both environments to confirm.
If you find that the execution plans are different (and parallelism is involved), I would suggest looking up the query hint OPTION
(MAXDOP 1) in BOL.
ChrisG
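[For reference, the OPTION (MAXDOP 1) hint ChrisG mentions is appended to the statement itself and forces a serial plan for that one query, leaving the server-wide 'max degree of parallelism' setting untouched. A sketch against a hypothetical billing query; the table and column names here are made up for illustration:]

```sql
-- Hypothetical billing aggregate forced to run without parallelism.
SELECT CustomerID, SUM(Amount) AS BillTotal
FROM tblBILLING_DETAIL          -- invented table name
GROUP BY CustomerID
OPTION (MAXDOP 1)
```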
I would still change the MAXDOP and see what happens.
Andrew J. Kelly
SQL Server MVP
"Dean" <anonymous@discussions.microsoft.com> wrote in message
news:827EF5AF-5FC0-43FD-BFD5-BD7D7A89C3D8@microsoft.com...
> We monitored the server with performance counters on the processors, RAM
and drives. Yes, it is one big RAID partition but the monitors showed almost
no activity on the disks and very little on the RAM while the processors
were very high.
Exactly, I had this issue when we were doing the migration; it took half the time when we turned that off. Is that a Dell machine? Yes, you should turn that off.

I'll try it and see what happens...
