
Wednesday, March 28, 2012

I don't understand this "server timeout"

I was doing update statements in SQL Server 2000.

I have a table with over 16 million rows.

It came from several hundred delimited text files,
and two of the columns are file ID (int) and Line # (int)

Structure is X12 (835). For those unfamiliar with that,
each file has one to many BPR lines; each BPR line has
zero to many CLP lines, each of those has zero to many
SVC lines, each of those has zero to many CAS lines.

Working with this through the Enterprise Manager MMC,
a lot of things I tried got timeouts.

So, I indexed File ID, Line number, and line type, and
created a new table containing only the columns I knew
I would need in the final output--selected fields from
some of the line types mentioned, plus the line numbers
and common file ID for those rows.

I indexed every column in that table that I thought I might
search on.

I loaded it with 31 thousand rows using a select on a
subset of the CAS rows. That took far less than a minute.

I updated each row with the highest BPR line number not higher
than the CAS line number. About a minute. Not bad, given that
the worst-case number of comparisons is 16 million times 31 thousand.
Of course, the indexing should help plus it can be narrowed down by
the "same file" and BPR # < CAS # criteria.

But the next update should theoretically be faster: each row now has
a BPR # and a CAS # and I am telling it to find the highest CLP number
BETWEEN those two. So it should have a MUCH smaller set to search
through. Yet it thinks for about five minutes and then announces a timeout.
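For concreteness, the two updates described above can be sketched on a toy dataset. The schema and names here (raw_segs, inv_data, fid, seg_nbr) are my paraphrase of the post, not the actual tables, and Python's sqlite3 stands in for the SQL Server 2000 environment, which is not at hand:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
# Hypothetical miniatures of the tables described in the post:
# raw_segs holds every parsed X12 line; inv_data holds the CAS rows to enrich.
cur.execute("CREATE TABLE raw_segs (fid INT, seg_nbr INT, seg_type TEXT)")
cur.execute("CREATE TABLE inv_data (fid INT, cas_seg INT, bpr_seg INT, clp_seg INT)")
cur.executemany("INSERT INTO raw_segs VALUES (?, ?, ?)",
                [(1, 1, "BPR"), (1, 2, "CLP"), (1, 3, "CAS"),
                 (1, 4, "BPR"), (1, 5, "CLP"), (1, 6, "CAS")])
cur.execute("INSERT INTO inv_data (fid, cas_seg) "
            "SELECT fid, seg_nbr FROM raw_segs WHERE seg_type = 'CAS'")

# Step 1: highest BPR line number below each CAS line (the update that worked).
cur.execute("""
    UPDATE inv_data
       SET bpr_seg = (SELECT MAX(seg_nbr) FROM raw_segs r
                       WHERE r.fid = inv_data.fid
                         AND r.seg_type = 'BPR'
                         AND r.seg_nbr < inv_data.cas_seg)""")

# Step 2: highest CLP line number BETWEEN the two (the update that timed out).
cur.execute("""
    UPDATE inv_data
       SET clp_seg = (SELECT MAX(seg_nbr) FROM raw_segs r
                       WHERE r.fid = inv_data.fid
                         AND r.seg_type = 'CLP'
                         AND r.seg_nbr BETWEEN inv_data.bpr_seg AND inv_data.cas_seg)""")

print(cur.execute(
    "SELECT cas_seg, bpr_seg, clp_seg FROM inv_data ORDER BY cas_seg").fetchall())
# [(3, 1, 2), (6, 4, 5)]
```

On this miniature both steps are trivially fast; the thread's question is why the second, logically narrower step was the slow one at scale.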

Any suggestions?

--
Wes Groleau

Measure with a micrometer, mark with chalk, and cut with an axe.

Wes Groleau (groleau+news@.freeshell.org) writes:

Quote:

Originally Posted by

Working with this through the Enterprise Manager MMC,
a lot of things I tried got timeouts.
>
So, I indexed File ID, Line number, and line type, and
created a new table containing only the columns I knew
I would need in the final output--selected fields from
some of the line types mentioned, plus the line numbers
and common file ID for those rows.
>
I indexed every column in that table that I thought I might
search on.
>
I loaded it with 31 thousand rows using a select on a
subset of the CAS rows. That took far less than a minute.
>
I updated each row with the highest BPR line number not higher
than the CAS line number. About a minute. Not bad, given that
the worst-case number of comparisons is 16 million times 31 thousand.
Of course, the indexing should help plus it can be narrowed down by
the "same file" and BPR # < CAS # criteria.
>
But the next update should theoretically be faster: each row now has a
BPR # and a CAS # and I am telling it to find the highest CLP number
BETWEEN those two. So it should have a MUCH smaller set to search
through. Yet it thinks for about five minutes and then announces a
timeout.


Unfortunately there is very little here to work from. X12 tells me
nothing. And in any case you have added a number of indexes that are
unknown to me. But let me point out one thing: indexing single columns is
far from always sufficient. Often you need composite indexes.
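A sketch of the composite-index point: one index over the three filter columns can answer the kind of query in this thread with a single index seek, where three separate single-column indexes would not. Table and column names are hypothetical stand-ins, and sqlite3 stands in for SQL Server:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE raw_segs (fid INT, seg_nbr INT, seg_type TEXT)")
cur.executemany("INSERT INTO raw_segs VALUES (?, ?, ?)",
                [(f, n, t) for f in range(10)
                 for n, t in enumerate(["BPR", "CLP", "CAS"] * 5)])
# One composite index matching the query's filters: equality columns first,
# the range column last.
cur.execute("CREATE INDEX idx_fid_type_seg ON raw_segs (fid, seg_type, seg_nbr)")

plan = cur.execute(
    "EXPLAIN QUERY PLAN "
    "SELECT MAX(seg_nbr) FROM raw_segs "
    "WHERE fid = ? AND seg_type = ? AND seg_nbr < ?",
    (3, "BPR", 10)).fetchall()
print(plan)  # the plan should report a search using idx_fid_type_seg
```

The same column-ordering rule (equality predicates before the range predicate) applies on SQL Server as well.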

To be able to say something more useful, I would need to see
the CREATE TABLE statements for the tables (or is there only one?),
as well as the indexes, including keys. And of course I would need
to know your UPDATE statements. And if there are any triggers, I
need to see those as well.

--
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se
Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/pr...oads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodin...ions/books.mspx

Erland Sommarskog wrote:

Quote:

Originally Posted by

Unfortunately there is very little here to work from. X12 tells me
nothing. And in any case you have added a number of indexes that are
unknown to me. But let me point out one thing: indexing single columns is
far from always sufficient. Often you need composite indexes.
>
To be able to say something more useful, I would need to see
the CREATE TABLE statements for the tables (or is there only one?),
as well as the indexes, including keys. And of course I would need
to know your UPDATE statements. And if there are any triggers, I
need to see those as well.


No triggers. I tried to script the table (actually I tried to script
a similar table to save myself some typing) but the wizard saved no file
and gave no error message. So I copied the table and used the GUI to
strip out the fields I didn't need/add a few others

All the fields referenced in the UPDATE statements are indexed.

The update statements are almost identical--the difference is that in

UPDATE Output SET xyz = (SELECT Max(Seg_Nbr) FROM Raw_Segs
WHERE Output.FID = Raw_Segs AND Seg_Nbr BETWEEN abc and pqr)

xyz, abc, & pqr are different columns, such that pqr - abc is
a wider range in the one that works; narrower in the one that
dies with timeout.

--
Wes Groleau

A pessimist says the glass is half empty.

An optimist says the glass is half full.

An engineer says somebody made the glass
twice as big as it needed to be.

Erland Sommarskog wrote:

Quote:

Originally Posted by

To be able to say something more useful, I would need to see
the CREATE TABLE statements for the tables (or is there only one?),
as well as the indexes, including keys. And of course I would need
to know your UPDATE statements. And if there are any triggers, I
need to see those as well.


Sorry for the too-soon send.

As I said, I did the table design with the GUI but it would be equivalent to
( CAS_Seg int,
SVC_Seg int,
CLP_Seg int,
BPR_Seg int,
other fields )

CAS_Seg is loaded first, with an INSERT from a view of Raw_Segs.

Then BPR_Seg is updated with the highest Seg_Nbr less than CAS_Seg in
the same file. Works.

Then one of the updates I just sent is tried and times out.

--
Wes Groleau
--
"The reason most women would rather have beauty than brains is
they know that most men can see better than they can think."
-- James Dobson

Wes Groleau (groleau+news@.freeshell.org) writes:

Quote:

Originally Posted by

No triggers. I tried to script the table (actually I tried to script
a similar table to save myself some typing) but the wizard saved no file
and gave no error message. So I copied the table and used the GUI to
strip out the fields I didn't need/add a few others
>
All the fields referenced in the UPDATE statements are indexed.
>
The update statements are almost identical--the difference is that in
>
UPDATE Output SET xyz = (SELECT Max(Seg_Nbr) FROM Raw_Segs
WHERE Output.FID = Raw_Segs AND Seg_Nbr BETWEEN abc and pqr)
>
xyz, abc, & pqr are different columns, such that pqr - abc is
a wider range in the one that works; narrower in the one that
dies with timeout.


Again, CREATE TABLE and CREATE INDEX statements for your two tables
would help. Knowing that "all fields ... are indexed" is not a very
useful piece of information. I would need to know where in the index
the column appears, and which index is the clustered index, if
there is one.

But you could try:

UPDATE Output
SET xyz = R.maxseg
FROM Output o
JOIN (SELECT Raw_segs, maxseg = Max(Seg_Nbr)
FROM Raw_Segs
WHERE Seg_Nbr BETWEEN abc AND pqr) R
ON Output.FID = R.Raw_Segs

While this syntax is proprietary and not portable, it often yields better
results than a correlated subquery.
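As posted, the sketch has a few holes (Wes later notes it was rejected): the derived table selects a bare table name rather than a column, has no GROUP BY for the MAX, and references the outer table's abc/pqr columns, which a derived table cannot do. The same idea in a form that actually runs, shown here with sqlite3 and invented toy data since the real tables are not in the thread, joins inside the derived table and applies the aggregate back in a second, keyed step:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
# Hypothetical miniatures of the Output and Raw_Segs tables from the post.
cur.execute("CREATE TABLE output (fid INT, abc INT, pqr INT, xyz INT)")
cur.execute("CREATE TABLE raw_segs (fid INT, seg_nbr INT)")
cur.executemany("INSERT INTO output (fid, abc, pqr) VALUES (?, ?, ?)",
                [(1, 2, 8), (2, 1, 5)])
cur.executemany("INSERT INTO raw_segs VALUES (?, ?)",
                [(1, 3), (1, 7), (1, 9), (2, 2), (2, 4), (2, 6)])

# The derived table: per fid, the highest seg_nbr inside that row's range.
# Joining output in here (instead of referencing its columns from outside)
# and adding GROUP BY makes the aggregate legal.
rows = cur.execute("""
    SELECT o.fid, MAX(s.seg_nbr) AS maxseg
      FROM output o
      JOIN raw_segs s
        ON s.fid = o.fid
       AND s.seg_nbr BETWEEN o.abc AND o.pqr
     GROUP BY o.fid""").fetchall()

# Apply the aggregate back to output; on SQL Server this second step would
# be the proprietary UPDATE ... FROM ... JOIN instead of a loop.
cur.executemany("UPDATE output SET xyz = ? WHERE fid = ?",
                [(maxseg, fid) for fid, maxseg in rows])
```

This assumes one Output row per FID for simplicity; with several rows per FID the derived table would have to group on Output's row key instead.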

--
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se
Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/pr...oads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodin...ions/books.mspx

Erland Sommarskog wrote:

Quote:

Originally Posted by

But you could try:
>
UPDATE Output
SET xyz = R.maxseg
FROM Output o
JOIN (SELECT Raw_segs, maxseg = Max(Seg_Nbr)
FROM Raw_Segs
WHERE Seg_Nbr BETWEEN abc AND pqr) R
ON Output.FID = R.Raw_Segs
>
While this syntax is proprietary and not portable, it often yields better
results than a correlated subquery.


I think I also tried that, but maybe not. I'm going in to work today,
so I'll make sure. And I'll make another try at extracting a script
from that thing.

--
Wes Groleau

Expert, n.:
Someone who comes from out of town and shows slides.

Wes Groleau (groleau+news@.freeshell.org) writes:

Quote:

Originally Posted by

I think I also tried that, but maybe not. I'm going in to work today,
so I'll make sure. And I'll make another try at extracting a script
from that thing.


Note that you can also script from Query Analyzer.

And if only the timeout bothers you, run the UPDATE from QA. QA does not
have any timeouts.

--
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se
Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/pr...oads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodin...ions/books.mspx

Erland Sommarskog wrote:

Quote:

Originally Posted by

But you could try:
>
UPDATE Output
SET xyz = R.maxseg
FROM Output o
JOIN (SELECT Raw_segs, maxseg = Max(Seg_Nbr)
FROM Raw_Segs
WHERE Seg_Nbr BETWEEN abc AND pqr) R
ON Output.FID = R.Raw_Segs
>
While this syntax is proprietary and not portable, it often yields better
results than a correlated subquery.


Or maybe

UPDATE Output
SET xyz = R.maxseg
FROM Output o
JOIN (SELECT FID, maxseg = Max(Seg_Nbr)
FROM Raw_Segs
WHERE Seg_Nbr BETWEEN abc AND pqr) R
ON Output.FID = R.FID

But unfortunately, Ent. Mgr/SQL Svr 2000 rejected this, saying that the
optional FROM syntax is not supported. By some experimentation, I got
THAT message to go away (even though both FROMs were still there).

But none of the ten variations I tried were accepted.
(By the way, page 72 and following of SQL Cookbook offers both this
approach and the one that my first approach was based on. But the
syntax it says will work for the first approach was also rejected
by my system--though I managed to alter it enough to work in the one case.)

Apparently, the "optimizer" is not very smart. I eventually got the job
done as follows:

Load Raw_Segments

Create and insert Raw_BPR, Raw_CLP, Raw_SVC, Raw_CAS, Patient_Names, etc.

Index the above

Create Inv_Data with indexes

Create view or table Selected_Adjustments from Raw_CAS

INSERT INTO Inv_Data
(FID, CAS_Seg, Adj_Group, Adj_Reason, Adj_Amount)
SELECT FID, Seg_Nbr, Type, Code, CAST(Amount AS money)
AS Expr1
FROM Selected_Adjustments

UPDATE Inv_Data
SET SVC_Seg = (SELECT MAX(Seg_Nbr)
FROM Raw_SVC AS Raw
WHERE Inv_Data.FID = Raw.FID
AND CAS_Seg > Seg_Nbr)

UPDATE Inv_Data
SET Service = (SELECT Elem_01
FROM Raw_SVC AS Raw
WHERE Inv_Data.FID = Raw.FID
AND SVC_Seg = Seg_Nbr)

UPDATE Inv_Data
SET CLP_Seg = (SELECT MAX(Seg_Nbr)
FROM Raw_CLP AS Raw
WHERE Inv_Data.FID = Raw.FID
AND CAS_Seg > Seg_Nbr)

UPDATE Inv_Data
SET BPR_Seg = (SELECT MAX(Seg_Nbr)
FROM Raw_BPR AS Raw
WHERE Inv_Data.FID = Raw.FID
AND CAS_Seg > Seg_Nbr)

Each update takes about ten seconds this way.

No doubt there's a simpler way, but I'm new at this.
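The staged approach above (split the big table by segment type, index the pieces, then run narrow correlated updates) is easy to sketch end to end. Everything here, data included, is an invented miniature of that pipeline, written against sqlite3 rather than SQL Server 2000:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE raw_segments "
            "(fid INT, seg_nbr INT, seg_type TEXT, elem_01 TEXT)")
cur.executemany("INSERT INTO raw_segments VALUES (?, ?, ?, ?)",
                [(1, 1, "BPR", ""), (1, 2, "CLP", ""), (1, 3, "SVC", "proc-A"),
                 (1, 4, "CAS", "22"), (1, 5, "SVC", "proc-B"), (1, 6, "CAS", "50")])

# Split the one big table into a small table per segment type, then index it,
# so each later correlated update scans only rows it can possibly match.
for t in ("BPR", "CLP", "SVC", "CAS"):
    cur.execute(f"CREATE TABLE raw_{t.lower()} AS "
                f"SELECT * FROM raw_segments WHERE seg_type = '{t}'")
    cur.execute(f"CREATE INDEX idx_{t.lower()} ON raw_{t.lower()} (fid, seg_nbr)")

cur.execute("CREATE TABLE inv_data "
            "(fid INT, cas_seg INT, svc_seg INT, service TEXT)")
cur.execute("INSERT INTO inv_data (fid, cas_seg) SELECT fid, seg_nbr FROM raw_cas")

# Highest SVC line below each CAS line, then that line's first element.
cur.execute("""UPDATE inv_data
                  SET svc_seg = (SELECT MAX(seg_nbr) FROM raw_svc r
                                  WHERE r.fid = inv_data.fid
                                    AND r.seg_nbr < inv_data.cas_seg)""")
cur.execute("""UPDATE inv_data
                  SET service = (SELECT elem_01 FROM raw_svc r
                                  WHERE r.fid = inv_data.fid
                                    AND r.seg_nbr = inv_data.svc_seg)""")
```

The design point is the same one the post reaches: pre-filtering into per-type tables turns each correlated subquery into a lookup over a small, well-indexed set.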

--
Wes Groleau

He that complies against his will is of the same opinion still.
-- Samuel Butler, 1612-1680

Erland Sommarskog wrote:

Quote:

Originally Posted by

And if only the timeout bothers you, run the UPDATE from QA. QA does not
have any timeouts.


How do you do that? When I click the icon that has the tooltip
"execution mode," it acts like it's doing something for a while,
and then it displays an execution plan. But the table is unchanged.

Then I select/copy the SQL, paste it into the Enterprise Manager and
click the exclamation point. It complains of a syntax error.

This happened before--I just forgot to mention it.

Removing the syntax error got the time out.

Maybe the timeout is because in one case (no timeout)
it first gathered part of the subquery (877 rows of 16 million)
making the second part 877 comparing to 31000. But if it
tries the other part first, it is checking the full 31000 rows
against the original 16 million.

I don't know whether that happened, but it would explain
why I did the job in less than sixty seconds by doing some
of the subqueries as separate extract and insert steps.

--
Wes Groleau
Heroes, Heritage, and History
http://freepages.genealogy.rootsweb.com/~wgroleau/

Wes Groleau (groleau+news@.freeshell.org) writes:

Quote:

Originally Posted by

How do you do that? When I click the icon that has the tooltip
"execution mode,"


You click on the green arrow. (Or press F5 or CTRL/E.) Then it will
run the query which is in the window.

I strongly encourage you to get acquainted with Query Analyzer to
run your queries. What you have in Enterprise Manager is a query
designer, and a fairly limited one, as testified by the bogus message
about the FROM clause not being supported.

In Query Analyzer you are only limited by what SQL Server permits; the
tool itself does not limit you.

--
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se
Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/pr...oads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodin...ions/books.mspx

Erland Sommarskog wrote:

Quote:

Originally Posted by

Wes Groleau (groleau+news@.freeshell.org) writes:

Quote:

Originally Posted by

>How do you do that? When I click the icon that has the tooltip
>"execution mode,"


>
You click on the green arrow. (Or press F5 or CTRL/E.) Then it will
run the query which is the window.


Thanks. I don't recall what the icon was that I clicked,
only that its tooltip said "execution mode" and that clicking it
caused around ten seconds of "hourglass" but no changes to the
table.

Quote:

Originally Posted by

I strongly encourage you to get acquainted with Query Analyzer to
run your queries. What you have in Enterprise Manager is a query
designer, and a fairly limited one, as testified by the bogus message
about the FROM clause not being supported.


Noted. I guess I'd better do so--although I'd rather get
SQL Server 2005, since that's what I went to class for.

--
Wes Groleau
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ^^^^^^^^^^^^^^^^^^^^^
^ A UNIX signature isn't a return address, it's the ASCII equivalent ^
^ of a black velvet clown painting. It's a rectangle of carets ^
^ surrounding a quote from a literary giant of weeniedom like ^
^ Heinlein or Dr. Who. ^
^ -- Chris Maeda ^
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ^^^^^^^^^^^^^^^^^^^^^

Wes Groleau (groleau+news@.freeshell.org) writes:

Quote:

Originally Posted by

Quote:

Originally Posted by

>I strongly encourage you to get acquainted with Query Analyzer to
>run your queries. What you have in Enterprise Manager is a query
>designer, and a fairly limited one, as testified by the bogus message
>about the FROM clause not being supported.


>
Noted. I guess I'd better do so--although I'd rather get
SQL Server 2005, since that's what I went to class for.


In SQL 2005, Enterprise Manager and Query Analyzer are both replaced
with SQL Server Management Studio. The Query Designer is still there,
and is still very limited, and you still do best to avoid it. Running
queries from the Query Editor is the way to go.

--
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se
Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/pr...oads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodin...ions/books.mspx

Erland Sommarskog wrote:

Quote:

Originally Posted by

In SQL 2005, Enterprise Manager and Query Analyzer are both replaced
with SQL Server Management Studio. The Query Designer is still there,
and is still very limited, and you still do best to avoid it. Running
queries from the Query Editor is the way to go.


Thanks for the tip. I'm saving it in case they ever actually
give us 2005.

We also have Oracle, but I don't have access to it.

--
Wes Groleau
Alive and Well
http://freepages.religions.rootsweb.com/~wgroleau/

I STILL don't understand the server timeout.

Erland Sommarskog wrote:

Quote:

Originally Posted by

And if only the timeout bothers you, run the UPDATE from QA. QA does not
have any timeouts.


Surprise! I went back to QA with your JOIN method. It would not accept
it, nor various modifications of it.

I ended up using more UPDATE queries similar to the ones I posted
before. The last two are particularly interesting. Each updated three
columns in Inv_Data. Each had the exact same structure except for the
WHERE clause. One came from table X, one from table Y (I forget the
exact names, but they don't matter).

ALL referenced columns in X and Y and Inv_Data are indexed in the same
manner (not clustered) except for the three fields being changed.

X and Y have the same number of rows and the fields used to select are
the same type and name. (And they have a small fraction of the number
of rows that did not cause a timeout in an earlier similar update)

The query that ended WHERE X.Seg_Nbr < Inv_Data.CAS_Seg completed
in under ten seconds.

The one that ended
WHERE X.Seg_Nbr BETWEEN Inv_Data.CLP_Seg AND Inv_Data.CAS_Seg
got a timeout from the server after about a minute.
(So perhaps QA doesn't timeout, but the server still does!)

The "estimated execution plans" were the same.

Changing the BETWEEN to the equivalent < CAS and > CLP didn't help.

Exiting QA and EM, defragmenting the drive, logging out and back in
and opening only QA -- didn't help.

I am now trying to do the work one input file at a time and then
insert the end result in the desired output. This way X and Y will
have 800-1500 rows instead of over 400K (but 16 MILLION did not get
a timeout!)

Curiouser and curiouser...

--
Wes Groleau
Heroes, Heritage, and History
http://freepages.genealogy.rootsweb.com/~wgroleau/

Wes Groleau (groleau+news@.freeshell.org) writes:

Quote:

Originally Posted by

Erland Sommarskog wrote:

Quote:

Originally Posted by

>And if only the timeout bothers you, run the UPDATE from QA. QA does not
>have any timeouts.


>
Surprise! I went back to QA with your JOIN method. It would not accept
it, nor various modifications of it.


Well, since you never care about posting your table definitions or sample
data, do you really expect me to post a tested working query? Maybe you
could at least post the actual query you tried, and the error message?

I am sorry if my tone is somewhat irritated, but it is very difficult
to assist when the person asking for help refuses to disclose vital
information.

Quote:

Originally Posted by

The one that ended
WHERE X.Seg_Nbr BETWEEN Inv_Data.CLP_Seg AND Inv_Data.CAS_Seg
got a timeout from the server after about a minute.
(So perhaps QA doesn't timeout, but the server still does!)


Again, can you post the actual error message, and the exact query you are
using? If you get a timeout, it may be that you are using a linked server.
(Not that you ever said that you are using linked server, but at this
point I'm starting to feel like a participant in a quiz competition.)

--
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se
Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/pr...oads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodin...ions/books.mspx

Erland Sommarskog wrote:

Quote:

Originally Posted by

Again, can you post the actual error message, and the exact query you are
using? If you get a timeout, it may be that you are using a linked server.


Not sure what is meant by "linked" but when I start QA,
I leave the server choice on "(local)"

I will see whether I can capture some of this information.

--
Wes Groleau

Answer not a fool according to his folly,
lest thou also be like unto him.
Answer a fool according to his folly,
lest he be wise according to his own conceit.
-- Solomon

Are you saying there's no good way to answer a fool?
-- Groleau

Erland Sommarskog wrote:

Quote:

Originally Posted by

Well, since you never care about posting your table definitions or sample
data, do you really expect me to post a tested working query? Maybe you
could at least post the actual query you tried, and the error message?


I'm not trying to be obscure--it's just that (1) I don't have access
to that system and Usenet at the same time and place and (2) the table
definitions were done with the GUI and (3) I couldn't get the scripting
to work. (I've done it before, but this time it asks for a filename and
then doesn't create the file)

Quote:

Originally Posted by

Again, can you post the actual error message, and the exact query you are


Here is what I thought I had already posted, but I have added more
explanation to it since then:
Load Raw_Segments

Raw_Segments
( FID int,
Seg_Nbr int,
SegType varchar(5),
Elem_01 varchar(x),
Elem_02 varchar(y),  -- x, y, ..., z are between 10 and 50
...
Elem_nn varchar(z) )

FID, Seg_Nbr, SegType are in the same nonclustered index

Raw_Segments was loaded from one humongous file by DTS.

Create and insert Raw_BPR, Raw_CLP, Raw_SVC, Raw_CAS, Patient_Names, etc.

I forget how I created these but each has the same structure/definition
as Raw_Segments but a subset of the data. BPR, CLP, SVC, CAS are
SegTypes and Patient_Names is all the segments where SegType = 'NM1'
and Elem_01 = 'QC'

Index the above the same as Raw_Segments

Create Inv_Data with indexes

From memory, probably
Inv_Data
( FID int, -- A
CAS_Seg int, -- B
Adj_Group varchar, -- C
Adj_Reason varchar, -- D
Adj_Amount money, -- E
SVC_Seg int, -- F
Service varchar, -- G
Name varchar, -- H
CLP_Seg int, -- I
Claim varchar, -- J
Status varchar, -- K
Charges money, -- L
BPR_Seg int, -- M
RA_Date date, -- N
Check_Amount money, -- O
Provider varchar ) -- P

A,B,D,F,H,I,J,M,P are indexed

Create View Selected_Adjustments from Raw_CAS

INSERT INTO Raw_CAS SELECT * FROM Raw_Segments WHERE SegType = 'CAS'

A CAS Segment can have up to six adjustments in different columns in the
same row so Selected_Adjustments is a view giving the union of
six selects to map them all into one set per row. Plus each has a
filter of Code in ('22', 'B22', '12', '50')

Raw_? is like Raw_CAS

INSERT INTO Inv_Data -- which is empty until this happens
(FID, CAS_Seg, Adj_Group, Adj_Reason, Adj_Amount)
SELECT FID, Seg_Nbr, Type, Code, CAST(Amount AS money)
AS Expr1
FROM Selected_Adjustments -- this worked

-- Of the following updates, some worked and some timed out:
UPDATE Inv_Data
SET SVC_Seg = (SELECT MAX(Seg_Nbr)
FROM Raw_SVC AS Raw
WHERE Inv_Data.FID = Raw.FID
AND Inv_Data.CAS_Seg > Raw.Seg_Nbr)

UPDATE Inv_Data
SET Service = (SELECT Elem_01
FROM Raw_SVC AS Raw
WHERE Inv_Data.FID = Raw.FID
AND SVC_Seg = Seg_Nbr)

DELETE
FROM Inv_Data AS I
WHERE Adj_Reason = '50'
AND Service not like '%GA%'

UPDATE Inv_Data
SET CLP_Seg = (SELECT MAX(Seg_Nbr)
FROM Raw_CLP AS Raw
WHERE Inv_Data.FID = Raw.FID
AND CAS_Seg > Seg_Nbr)

UPDATE Inv_Data
SET BPR_Seg = (SELECT MAX(Seg_Nbr)
FROM Raw_BPR AS Raw
WHERE Inv_Data.FID = Raw.FID
AND CAS_Seg > Seg_Nbr)

UPDATE Inv_Data
SET (fill in all the missing fields from Raw_whatever)

--
Wes Groleau

Change is inevitable. We need to learn that "inevitable" is
neither a synonym for "good" nor for "bad."
-- WWG

Wes Groleau (groleau+news@.freeshell.org) writes:

Quote:

Originally Posted by

I'm not trying to be obscure--it's just that (1) I don't have access
to that system and Usenet at the same time and place and (2) the table
definitions were done with the GUI and (3) I couldn't get the scripting
to work. (I've done it before, but this time it asks for a filename and
then doesn't create the file)


As I've said you can script from Query Analyzer as well.

And I am sorry, but I don't feel inclined to come with further
guesses without the error messages you are getting. Nor am I interested
in composing queries and then only hear "it wouldn't take them". I
understand that it may be difficult for you to bring the information
from one corner to another, but it is even more difficult for me
who don't even see the queries. And after all, copying an error message
from one corner of a room to another is not rocket science. Pen and
paper still works...

Quote:

Originally Posted by

>Not sure what is meant by "linked" but when I start QA,
>I leave the server choice on "(local)"


A linked server is a remote data source, which is accessed from SQL
Server in a distributed query. The linked server can be another SQL
Server, but can also be an Access database, Oracle database, Active
directory or anything else for which there is an ODBC driver or OLE DB
provider.

--
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se
Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/pr...oads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodin...ions/books.mspx

Erland Sommarskog wrote:

Quote:

Originally Posted by

And I am sorry, but I don't feel inclined to come with further
guesses without the error messages you are getting. Nor am I interested
in composing queries and then only hear "it wouldn't take them". I


I understand. What you did offer is appreciated.

Quote:

Originally Posted by

A linked server is a remote data source, which is accessed from SQL
Server in a distributed query. The linked server can be another SQL
Server, but can also be an Access database, Oracle database, Active
directory or anything else for which there is an ODBC driver or OLE DB
provider.


OK, then (as I suspected) it is not a linked server.

--
Wes Groleau

Nobody believes a theoretical analysis -- except the guy who did it.
Everybody believes an experimental analysis -- except the guy who
did it.
-- Unknown

Wes Groleau (groleau+news@.freeshell.org) writes:

Quote:

Originally Posted by

Erland Sommarskog wrote:

Quote:

Originally Posted by

>And I am sorry, but I don't feel inclined to come with further
>guesses without the error messages you are getting. Nor am I interested
>in composing queries and then only hear "it wouldn't take them". I


>
I understand. What you did offer is appreciated.


Permit me to make the final remark, that I'm only asking for error
messages and that for my own sake. If you want help with your problems,
there is no reason why you shouldn't make the effort to transfer the
information to the newsgroup.

But if you don't want to make that effort to get your issues resolved -
well, that's your call.

--
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se
Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/pr...oads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodin...ions/books.mspx

Erland Sommarskog wrote:

Quote:

Originally Posted by

Permit me to make the final remark, that I'm only asking for error
messages and that for my own sake. If you want help with your problems,
there is no reason why you shouldn't make the effort to transfer the
information to the newsgroup.


Obviously no one here is obligated to help, so I can't complain
if I get even a crumb. I did not preserve any of the error messages,
as I am under pressure to show progress. I got the thing working
in Access with a subset of the files. But there is NO WAY Access
will handle the quantity of data that is in the full set of files.

So I will have to go back to SQL Server, and when that happens,
I will try to capture any error messages. If there are any.

Thanks for your kindness--and that of the few others who contributed.

--
Wes Groleau

Even if you do learn to speak correct English,
whom are you going to speak it to?
-- Clarence Darrow

Friday, March 23, 2012

I can't get a SQL query to accept a null for a parameter

I have a straightforward INSERT query, which takes values mainly from text boxes; however, I am having trouble when the value is null.

eg
cmdP0.Parameters["@.EmpID"].Value=txtEmpNo.Text;

When I run the Query I get...
Message="Parameterized Query '(@.ID int,@.EmpID int,@.Photo nvarchar(260),@.DoB smalldatetime,@.Med' expects parameter @.EmpID, which was not supplied."

I also get this when the parameter can be a string.
I have set the parameter properties so that SourceColumnNullMapping is true (it was set as false - so I hoped this might fix it!)

No luck.

Have you tried setting DBNull.Value as the value of the parameter when the text is returned as null?

That worked for the strings; it took a bit of effort for integers.

Thanks to someone else's code, which I don't quite understand...

string inputGap = txtGap.Text;
Int32? searchGap = null;
if (!string.IsNullOrEmpty(inputGap)) {
    Int32 Gap;
    if (Int32.TryParse(inputGap, out Gap))
        searchGap = Gap;
}
if (searchGap != null) {
    cmdP0.Parameters["@.MedicalGap"].Value = searchGap;
} else {
    cmdP0.Parameters["@.MedicalGap"].Value = DBNull.Value;
}

I don't know what Int32? searchGap = null; does


Int32 is a struct (a value type). That means it can't be null. .NET 2.0 adds so-called nullable types, which let you set an Int32 to null by using a special syntax: Int32? (with the question mark) means the variable is nullable, i.e. it can be set to null.

This piece of code is too complex. You can make it a lot easier by doing this:

int value = 0;
if (int.TryParse(txtGap.Text, out value))
    cmdP0.Parameters["@.MedicalGap"].Value = value;
else
    cmdP0.Parameters["@.MedicalGap"].Value = DBNull.Value;
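The same idea in Python's sqlite3, for comparison: the driver binds None as SQL NULL, playing the role DBNull.Value plays in ADO.NET. The table and column names below are made up for the sketch:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE claims (emp_id INTEGER, medical_gap INTEGER)")

def parse_or_none(text):
    # Mimic the int.TryParse pattern: empty or non-numeric input becomes None.
    try:
        return int(text)
    except (TypeError, ValueError):
        return None

# An empty text box yields None, which the driver stores as SQL NULL.
cur.execute("INSERT INTO claims VALUES (?, ?)",
            (parse_or_none("42"), parse_or_none("")))
print(cur.execute("SELECT emp_id, medical_gap FROM claims").fetchone())  # (42, None)
```

Whatever the language, the shape is the same: parse first, and send the database an explicit NULL token rather than an empty string.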


Thank you

A nice clear explanation and some nice code

Much appreciated, this has been driving me nuts!

Monday, March 19, 2012

I can't solve this error

Invalid object name "Northwind" in this line of code:
Text1.text = cn.Executescalar()

We are going to need more information than that. It seems like your connection string is trying to connect to a non-existent Northwind database.

Monday, March 12, 2012

I can not enter non english text in SQL express...

I am trying to add non-English text in a table in my SQL Express, but when it refreshes, the entry returns unrecognizable characters (@.#$@.).

How may I configure SQL Express languages, or is there something else I should know?

Many thanks in advance

Are your column types nchar/nvarchar/ntext or are they just char/varchar/text?

In order to use international strings, you must store the data with wide character types.
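That garbage output is the classic symptom of Unicode text being squeezed through a single-byte code page. A small Python illustration of the difference (cp1252 here is just an example code page; the actual server collation may differ):

```python
import sqlite3

text = "Ελληνικά"  # sample non-English text (Greek)

# Forcing it through a single-byte code page -- roughly what happens when
# Unicode text lands in a char/varchar column -- destroys it:
mangled = text.encode("cp1252", errors="replace").decode("cp1252")
print(mangled)  # ????????

# Storing it in a genuinely Unicode-capable column (nvarchar in SQL Server;
# sqlite TEXT is always Unicode) keeps it intact:
con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE notes (body TEXT)")
cur.execute("INSERT INTO notes VALUES (?)", (text,))
print(cur.execute("SELECT body FROM notes").fetchone()[0])  # Ελληνικά
```

In T-SQL, remember the N prefix on string literals as well (N'...'), or the literal is narrowed to the code page before it ever reaches the nvarchar column.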


Hello,

I had no idea about it... I will change the field type

Thanks

Friday, March 9, 2012

I am not sure whether I can use sql mobile.

Hi,

There are many text records that need to be stored in my application.

I am new guy to Sql Mobile.

If I choose SQL Mobile to save my data, will my app be more complex to deploy?

Before I use SQL Mobile on a device (WM5), do I need to install it first?

any other advice?

Best regards

PWang

SQL Mobile can store your textual data if your mobile application is constructed with Visual Studio 2005. You will have to include the SQL Mobile CAB files along with your application and install them on device in order to use this database engine. Yes, you have to install SQL Mobile on WM5 devices - it is not included in ROM.

You might want to read through some of the introductory information on the SQL Mobile Developer's Center on MSDN.

http://msdn.microsoft.com/sql/mobile/default.aspx

Darren

I am Having an Odd Problem with Exporting to Excel.

The background:
Before I get into my error let me give you some background in what I am
required to do. All the text field parameters must be stored in the
database. What I mean by this is every text box's value, background
colour, font(weight, size and etc), TextAlign and TextDecoration
information is stored in the database. This includes the fields for
headers and footers. To get these parameters to work properly in
headers and footers I have set up report parameters that get the data
from the database and then feed it to the text fields inside the header
or footer.
The problem:
I am currently getting the following error while trying to export to
Excel:
--
Microsoft Internet Explorer
--
Internet Explorer cannot download Format=EXCEL from SERVERNAME.
Internet Explorer was not able to open this Internet site. The
requested site is either unavailable or cannot be found. Please try
again later.
--
OK
--
After some investigation I discovered that the appearance property
"TextAlign" for all footers is causing this error. So, if I put in
"Left", "Right" or "Center" text directly into the
"TextAlign" field for all the footer text boxes the report exports
into Excel properly. But if I use the property from the database it
will not load into Excel. I've checked to make sure the database
field is giving a valid value. In fact, I've tried all 3 values and
had no luck. I've also tried other TextAlign parameters from
different text fields on the report and received the same error as
stated above.
What I find odd about this is the fact it works fine for all the other
properties but TextAlign. Could this be an obscure bug?
Another thing to note: in Excel I can see the header all the time, but I
can only see the footer in print preview. Is there a way to set up the
footer just like the header, so it is simply at the bottom of the page
instead of being a "true" footer? It wouldn't be a pretty way to solve my
issue, but I think it would at least allow my clients to use Excel export
for now.

Oh, I am using SQL Server 2000 Reporting Services with SP2.

|||

Hello!
I have encountered the same problem with export to Excel ("Internet Explorer
cannot download Format=EXCEL from SERVERNAME."). In my case the solution was
simple: I just had to make the name of the report shorter. You can simply
upload your rdl under another (shorter) name to Report Manager and it will
possibly start working.
"Jeremy Balliston" wrote:
> Oh, I am using SQL Server 2000 Reporting Services with SP2.

|||

Any ideas?

I am getting tempdb full error again

This is the error that I am getting in the error log
The log file for database 'tempdb' is full. Back up the transaction log for the database to free up some log space.
Error: 9002, Severity: 17, State: 6

This is the information of my database server.

exec sp_helpdb tempdb
name,db_size,owner,dbid,created,status,compatibility_level
tempdb,89.06 MB,sa,2,Aug 2 2004,Status=ONLINE, Updateability=READ_WRITE, UserAccess=MULTI_USER, Recovery=SIMPLE, Version=539, Collation=SQL_Latin1_General_CP1_CI_AS, SQLSortOrder=52, IsAutoCreateStatistics, IsAutoUpdateStatistics,80

exec sp_spaceused
name,fileid,filename,filegroup,size,maxsize,growth,usage
tempdev,1,C:\Program Files\Microsoft SQL Server\MSSQL\data\tempdb.mdf,PRIMARY,90432 KB,Unlimited,10%,data only
templog,2,C:\Program Files\Microsoft SQL Server\MSSQL\data\templog.ldf,,768 KB,Unlimited,10%,log only
database_name,database_size,unallocated space
tempdb,89.06 MB,87.66 MB
reserved,data,index_size,unused
672 KB,184 KB,400 KB,88 KB

exec master..xp_fixeddrives
drive,MB free
C,36715
D,371955

Please help. Where is the issue? I can manually increase the size of the tempdb, so there is no permission issue.

-Nags
Why don't you just make tempdb larger than you need and forget about this issue? You only have it at 89 MB and the log at less than a MB. Disk space is way too cheap these days to deal with issues like this. Make it bigger and move on.

-- Andrew J. Kelly SQL MVP

"Nags" <nags@.DontSpamMe.com> wrote in message news:edPDW$8eEHA.1652@.TK2MSFTNGP09.phx.gbl...

|||
Nags,
This probably means that you are holding an open transaction across tempdb, and the log is growing and growing and growing. Even though the recovery mode is SIMPLE, there is a need for log space during a transaction.

First: It looks like you have plenty of disk space, but have you verified that when the log file is 'full' there is still free space on the drive? Or is the space indeed used up? If it is, then you have some large transaction running. (You can use DBCC OPENTRAN to report on the oldest open transaction in a database.)

Next: Your tempdb log is 768 KB and growing at 10%. Possibly your log usage is simply increasing faster than the 10% increments can be made, and you get a false "log full". Try setting the tempdb log to be 10 MB with a 5 MB increment and see if that helps.

Both of these and other possibilities are described in:
http://support.microsoft.com/default.aspx?scid=kb;EN-US;317375
Russell Fields
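A minimal sketch of the checks and settings suggested above (the sizes are the example values from the advice, not a recommendation):

```sql
-- Report the oldest open transaction in tempdb, if any.
DBCC OPENTRAN ('tempdb');

-- Give the tempdb log a fixed starting size and a fixed-MB growth
-- increment instead of 10%, so each growth request stays small and fast.
ALTER DATABASE tempdb
MODIFY FILE (NAME = templog, SIZE = 10MB, FILEGROWTH = 5MB);
```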
"Nags" <nags@.DontSpamMe.com> wrote in message news:edPDW$8eEHA.1652@.TK2MSFTNGP09.phx.gbl...

|||
Yes, I have verified, and there is about 33 Gig free. DBCC OPENTRAN shows that there are no open transactions.

I did follow your suggestion of making the log for the tempdb 10 MB with a 5 MB increment. Let me see if I am going to get the same error.

More info: All the databases were moved from an old server to this new server. We have about 20 such servers, with almost identical processing being done on all of them. We never saw such an error. We are getting this only on the new server that we built. That is why I am trying to do further research: we are going to upgrade our production server, and I want to be sure that we do not run into such issues there. I am more worried because I have a feeling that it might be a configuration issue, and I am not sure where to look.
-Nags
"Russell Fields" <RussellFields@.NoMailPlease.Com> wrote in message news:uXE7O99eEHA.3428@.TK2MSFTNGP11.phx.gbl...

|||
I cannot do that. This is a new server that we built, and this could be a configuration issue. We are in the process of upgrading our production server, and what if a similar problem occurs on production? I can allocate about 2 Gig for tempdb and 2 Gig for the tempdb log, and then one day a huge load comes on the server (which we are expecting in the next few months) and we will get the same error. I cannot afford this on production.
-Nags
"Andrew J. Kelly" <sqlmvpnooospam@.shadhawk.com> wrote in message news:eD$X289eEHA.2028@.tk2msftngp13.phx.gbl...

|||
This error basically comes about when the log needs to autogrow and it can't do it fast enough. There is still no excuse for having the tempdb files that small. Sure, this situation may come up at any time, regardless of size, if the conditions are wrong, but you are putting yourself in a position for it to happen right away. Any time you can do something proactively to avoid an issue, you should do it.

The other thing is that it sounds like your hardware is not able to keep up with the autogrow request. Growing is a very resource-intensive process, and if the hardware (CPU, disks, etc.) is busy or inadequate, you can get this condition. Make sure you don't have high disk or CPU queues. Also make sure the autogrowth size is set at a point where the hardware can keep up. By this I mean you don't want to autogrow at 10% if you have a 10 GB file and slow disks. Make it a size in MB that it can easily grow with little effort.

-- Andrew J. Kelly SQL MVP
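The advice above (pre-size the files, grow in fixed MB steps rather than a percentage) can be sketched like this; the sizes are illustrative, not a recommendation:

```sql
-- Pre-size the tempdb data and log files, and grow them in fixed-MB
-- steps so each autogrow request is a small, predictable amount of I/O
-- rather than a percentage of an ever-larger file.
ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, SIZE = 2048MB, FILEGROWTH = 100MB);

ALTER DATABASE tempdb
MODIFY FILE (NAME = templog, SIZE = 512MB, FILEGROWTH = 50MB);
```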

"Nags" <nags@.DontSpamMe.com> wrote in message news:eAH$0G%23eEHA.3520@.TK2MSFTNGP10.phx.gbl...

|||
Please understand the situation: the size of the tempdb log was as I gave below when it gave the error, i.e. just 768 KB. I would assume the log file would have to be huge for the error to occur.

The log file for tempdb now is just 20 MB. And it is a brand new server with the latest hardware, latest disks, and latest bus speed; I/O for a 20 MB file cannot be a bottleneck. It is giving an error for the log file, and the file is so small that even if it has to grow 10%, that would be only 2 MB. This should not give an error. That is what I am concerned about.

On production, we allocate about 2 Gig for tempdb and let it autogrow by 10%. With your recommendation, we will let it autogrow by 5 MB.
-Nags
"Andrew J. Kelly" <sqlmvpnooospam@.shadhawk.com> wrote in message =news:#H4stR#eEHA.2812@.tk2msftngp13.phx.gbl...
This error basically comes about when the log needs to autogrow and it =can't do it fast enough. There still is no excuse for having the tempdb =files that small. Sure this situation may come up at any time =regardless of the size if the conditions are wrong but you are putting =yourself in a position for this to happen right away. Any time you can =do something proactively to avoid an issue you should do it. The other =thing is that it sounds like your hardware is not able to keep up with =the autogrow request. Growing is a very resource intensive process and =if the hardware (CPU, Disks etc) are busy or inadequate you can get this =condition. Make sure you don't have high disk or cpu queues. Also make =sure the autogrowth size is only at a point where it can keep up with =the hardware. By this I mean you don't want to autogrow at 10% if you =have a 10GB file and slow disks. Make it a size in MB that it can =easily grow with little effort.
-- Andrew J. Kelly SQL MVP
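As a sketch of that advice, pre-sizing tempdb and switching from percentage growth to a fixed increment can be done with ALTER DATABASE. The logical file names (tempdev, templog) match the file listing quoted below; the sizes here are only illustrative, not a recommendation for any particular workload:

```sql
-- Pre-size tempdb so autogrow rarely fires, and grow by a fixed
-- amount the disks can deliver quickly instead of a percentage.
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, SIZE = 2048MB, FILEGROWTH = 100MB);
ALTER DATABASE tempdb MODIFY FILE (NAME = templog, SIZE = 512MB, FILEGROWTH = 50MB);
```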

"Nags" <nags@.DontSpamMe.com> wrote in message news:eAH$0G%23eEHA.3520@.TK2MSFTNGP10.phx.gbl...
I cannot do that. This is a new server that we built, and this could be a configuration issue. We are in the process of upgrading our production server, and what if a similar problem occurs on production? I can allocate about 2 GB for tempdb and 2 GB for the tempdb log, and then one day a huge load comes on the server (which we are expecting in the next few months) and we will get the same error. I cannot afford this in production.
-Nags
"Andrew J. Kelly" <sqlmvpnooospam@.shadhawk.com> wrote in message news:eD$X289eEHA.2028@.tk2msftngp13.phx.gbl...
Why don't you just make tempdb larger than you need and forget about this issue? You only have it at 89 MB and the log at less than a MB. Disk space is way too cheap these days to deal with issues like this. Make it bigger and move on.
-- Andrew J. Kelly SQL MVP

"Nags" <nags@.DontSpamMe.com> wrote in message news:edPDW$8eEHA.1652@.TK2MSFTNGP09.phx.gbl...
This is the error that I am getting in the error log
The log file for database 'tempdb' is full. Back up the transaction log for the database to free up some log space.
Error: 9002, Severity: 17, State: 6
This is the information of my database server.
exec sp_helpdb tempdb
name,db_size,owner,dbid,created,status,compatibility_level
tempdb, 89.06 MB,sa,2,Aug 2 2004,Status=ONLINE,
Updateability=READ_WRITE, UserAccess=MULTI_USER, Recovery=SIMPLE, Version=539, Collation=SQL_Latin1_General_CP1_CI_AS, SQLSortOrder=52, IsAutoCreateStatistics, IsAutoUpdateStatistics,80
exec sp_spaceused
name,fileid,filename,filegroup,size,maxsize,growth,usage
tempdev,1,C:\Program Files\Microsoft SQL Server\MSSQL\data\tempdb.mdf,PRIMARY,90432 KB,Unlimited,10%,data only
templog,2,C:\Program Files\Microsoft SQL Server\MSSQL\data\templog.ldf,,768 KB,Unlimited,10%,log only
database_name,database_size,unallocated space
tempdb,89.06 MB,87.66 MB
reserved,data,index_size,unused
672 KB,184 KB,400 KB,88 KB
exec master..xp_fixeddrives
drive,MB free
C,36715
D,371955
Please help. Where is the issue? I can manually increase the size of the tempdb, so there is no permission issue.
-Nags
|||
Just because it is the latest disks and hardware does not mean it is working properly. Have you monitored the system to ensure there are no disk, memory or CPU bottlenecks? I have seen issues similar to this when a RAID array was broken and was computing the parity, slowing everything down dramatically. From your xp_fixeddrives output it looks like you have at most 2 drive arrays (C: & D:). Are these 2 physical arrays, or 1 array with 2 logical drives? Do you have tempdb and the tempdb log on the same drive as the other databases and log files? What kind of array is it? And by the way, 5 MB is probably too small. You are correct that a few MB should not be a problem on properly operating hardware. While you want to ensure you don't grow by too large an amount, you don't want it way too small either.
-- Andrew J. Kelly SQL MVP

"Nags" <nags@.DontSpamMe.com> wrote in message news:uOijUC$eEHA.2896@.TK2MSFTNGP11.phx.gbl...

Wednesday, March 7, 2012

Hyphen padding in T-SQL query result

A newbie question here:

In T-SQL, my query to a text file produces a padded second line containing a set of hyphens. How do I turn that off, so that the first line contains the headers and the results immediately follow on line 2?

Thanks!

You need to describe your problem a bit more. It would help to have the relevant code, the data definitions, and what the incorrect and correct results look like. It would also help to have associated sample data.|||

Certainly.

I'm composing a job in SQL Server using sqlcmd. Here's my command line:

EXEC master..xp_cmdshell 'sqlcmd -X -S server -q "SET NOCOUNT ON; SELECT * FROM pubs.dbo.test" -s , -w 255 -u -o "\\server\output.csv"'

The output is just fine, with the exception of that pesky line 2 of padded hyphens. It appears like this:

department,FundCode,Month
-,--,--,
DM ,SAZXZ , 8
DM ,ESJXZ , 7
DM ,DGLXS , 9
GN ,DGLXZ , 8

|||

That 'pesky line 2' is part of the output 'header'.

You can turn the headers off (including line 1 with the column names) by adding this switch to the sqlcmd command line:

-h-1

(No space between the h and the -1.)

You probably don't need line 1 or 'pesky line 2' in your application.

As far as I am aware, those are your choices...

|||

Plus, if you don't like the output, it is pretty easy to build a console app to get the results. I had to do that once to deal with very large Text values that I couldn't get sqlcmd to handle.

In 2005, you might even be able to write a CLR stored procedure (albeit an unsafe one) that does the work directly and that you have complete control over.

|||I was aware of the -h switch, but I'd prefer to keep the headers explicit. If sqlcmd doesn't support that, then I'll have to write something up. Thanks for your input, everyone!|||This is kind of a cheesy solution, but it works.

Use this command:

EXEC master..xp_cmdshell 'sqlcmd -X -S server -q "SET NOCOUNT ON; SELECT ''department'',''FundCode'',''Month'' SELECT * FROM pubs.dbo.test" -s , -w 255 -u -o "\\server\output.csv" -h-1'

Notice the header switch (-h) has a value of -1, which means the header is off. The first SELECT statement returns one extra row of results, which contains the "headers".

Your output should look like this:

department,FundCode,Month
DM ,SAZXZ , 8
DM ,ESJXZ , 7
DM ,DGLXS , 9
GN ,DGLXZ , 8


|||

Brilliant! I had to use double single-quotes (e.g., ''Department'') to get it to parse correctly, but this is what I needed.

Not cheesy at all. I too hide my inelegance with cleverness.

Sunday, February 19, 2012

How to add text in a 'select' statement

Hi Friends,
Could you tell me how to add text in a select statement.
For eg:
SELECT ((datepart(MM,CURRENT_TIMESTAMP)+ '-'
+(datepart(dd,CURRENT_TIMESTAMP) + '-' +
datepart(yyyy,CURRENT_TIMESTAMP))AS
"CURRENT_DATE"
...of course, this is not correct, but it reflects my idea.
Thanks & Regards
Dutt
Seems you got the general idea. The problem is that DATEPART returns an integer value, and you cannot concatenate that with a string (the string will be converted to an int, which will fail). So you would have to put a CAST around each DATEPART call to make it char or varchar. But first check whether the CONVERT function already has a style with the formatting you desire
(check the 3rd parameter in Books Online).
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
Blog: http://solidqualitylearning.com/blogs/tibor/
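Both routes can be sketched in T-SQL; style 110 of CONVERT happens to be the mm-dd-yyyy layout the example above is after:

```sql
-- Option 1: let CONVERT format the date directly (style 110 = mm-dd-yyyy)
SELECT CONVERT(varchar(10), CURRENT_TIMESTAMP, 110) AS "CURRENT_DATE";

-- Option 2: CAST each DATEPART result to varchar before concatenating
SELECT CAST(DATEPART(mm, CURRENT_TIMESTAMP) AS varchar(2)) + '-' +
       CAST(DATEPART(dd, CURRENT_TIMESTAMP) AS varchar(2)) + '-' +
       CAST(DATEPART(yyyy, CURRENT_TIMESTAMP) AS varchar(4)) AS "CURRENT_DATE";
```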
|||Thanks Tibor,
It's a really useful link... it worked.
