Monday, March 26, 2012

How to determine space used by a file

Nags,
You could automate all of this using 'Performance Condition' alerts.
You can define an alert to be fired if the db size goes above 3 GB (you have to specify this number in KB, though). As a response to the alert, you could set up a SQL Server job to increase the database to the appropriate size.
The only catch here is that you will have to be creative in automatically increasing the size by 4 GB. Usually the jobs I create have a fixed db size specified.
You can also be notified when this alert fires and increase the db size in your script after the script has run. This will work the next time your db gets full. You will also have to adjust the parameter for the db size (KB value).
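As a rough sketch, the alert side could look like this (the alert, job, and database names are made up, the counter names assume a default instance, and the 'Grow MyDB' job has to be created separately; 3 GB = 3145728 KB):

    -- Sketch only: fire an alert when MyDB's data files exceed 3 GB
    -- and run a pre-existing job that grows the files.
    USE msdb
    GO
    EXEC sp_add_alert
        @name = N'MyDB data files over 3 GB',
        @performance_condition = N'SQLServer:Databases|Data File(s) Size (KB)|MyDB|>|3145728',
        @enabled = 1,
        @job_name = N'Grow MyDB'   -- job that runs the ALTER DATABASE step
    GO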
Hope this helps,
DeeJay

>--Original Message--
>How do I determine the space used in a file? We have a 4 Gig file
>pre-allocated. We want to be sure that there is enough space in that.
>We want to run scripts to warn us if the space available in a file
>group is less than one Gig. If it is less than 1 Gig we want to
>allocate another 4 Gig file.
>-Nags

My database is about 60 Gig with several filegroups. My problem is not to preallocate too much and not to have space issues. We want to preallocate to avoid disk fragmentation. I need to find out how much space is free in a file, calculate the total free space in a file group, and if it is less than 1 Gig, only then add the new file. I hope you understand my requirement. It's not the db size I am interested in; I need to know the amount of data in each file.
-Nags
"DeeJay Puar" <deejaypuar@.yahoo.com> wrote in message
news:00ae01c423f4$cf7dff70$a101280a@.phx.gbl...[vbcol=seagreen]
> Nags,
> You could automate all of this using 'Performance
> Condition' Alerts.
> You can define a alert to be fired if the db size goes
> above 3 GB (you have to put this number in KB's though).
> As a response to the alert, you could setup a SQL Server
> job to increase the database to appropriate size.
> The only catch here is that you will have to be creative
> in automatically increasing the size by 4 GB. Usually the
> jobs I creat jobs that have a fixed db size specified.
> You can also be notified when this alert fires and
> increase db size in your script after the script has ran.
> This will work the next time your db gets full. You will
> also have to adjust the parameter for the db size (KB
> value)
>
> Hope this helps,
> DeeJay
>
> 4 Gig file pre
> in that. We want
> file group is less

Adding new files in response to low free space in the db is a very bad idea.
SQL Server uses a proportional fill algorithm that is based on the amount of
free space in each file in the filegroup. The ideal situation would be to
have multiple files all of the same size with the same amount of free space.
In your case, if you added a new file, most of the new data would go into the new file, defeating some of the purpose of spreading the data over multiple files. What you should do is grow each existing file manually (or with a script, not autogrow) so they all stay the same size.
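For example, a sketch of growing every file in a filegroup to the same target size (the database name, logical file names, and sizes are placeholders):

    -- Sketch only: grow each data file in the filegroup to the same size
    -- so the proportional fill stays even. One MODIFY FILE per statement.
    ALTER DATABASE MyDB MODIFY FILE (NAME = 'MyDB_Data1', SIZE = 8GB)
    ALTER DATABASE MyDB MODIFY FILE (NAME = 'MyDB_Data2', SIZE = 8GB)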
Andrew J. Kelly SQL MVP
"Nags" <nags@.DontSpamMe.com> wrote in message
news:unOwBl$IEHA.3120@.TK2MSFTNGP11.phx.gbl...
> My Database is about 60 Gig with several filegroups. My problem is not to
> preallocate too much and not to have space issues. We want to preallocate
> to avoid disk defragmentation. I need to find out how much space is free
in
> a file, calculate the total free space in a file group and if it is < than
1
> gig, only then add the new file. I hope you understand my requirement.
Its
> not the db size I am interested, I need to know the amount on data in each
> file.
> -Nags
> "DeeJay Puar" <deejaypuar@.yahoo.com> wrote in message
> news:00ae01c423f4$cf7dff70$a101280a@.phx.gbl...

Let me explain how we are managing our disk space and file groups more clearly.
What we had before:
- we have taken size estimates of all our current tables and of our database size
- the production database is growing by about 50% every year, based on 3 years of data
- we had disk contention
- we had performance issues because of this
What we did to avoid it:
- we have 3 major tables, each placed in its own filegroup, with their indexes in a separate filegroup
- we moved all transaction tables (inserts only) into a separate filegroup and their indexes into another filegroup
- we moved all heavily accessed tables into a separate filegroup and their indexes into another filegroup
- we moved all other tables into a separate filegroup and their indexes into another filegroup
- we distributed these filegroups onto 4 disk volumes; two volumes are 3-disk RAID 5 and 2 volumes are mirrored disks
- each filegroup has been allocated twice the space required
- each file has been created at 4 Gig so that disk space is preallocated and does not expand on demand, which might cause disk fragmentation (see the sketch after this list)
- filegroups have been created so that we can move individual filegroups to different disk sets when volume grows and we need better performance
- we are collecting statistics periodically to find out which disk drive has contention, and if we find one, we will add new volumes and move the filegroups onto them
- each filegroup is made up of multiple files
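As a sketch, that preallocation looks roughly like this in T-SQL (the filegroup name, logical file names, and paths below are made up):

    -- Sketch only: a filegroup with two preallocated 4 Gig files.
    ALTER DATABASE MyDB ADD FILEGROUP BigTable1FG
    ALTER DATABASE MyDB
    ADD FILE (NAME = 'BigTable1FG_1', FILENAME = 'E:\data\BigTable1FG_1.ndf', SIZE = 4GB),
             (NAME = 'BigTable1FG_2', FILENAME = 'F:\data\BigTable1FG_2.ndf', SIZE = 4GB)
    TO FILEGROUP BigTable1FG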
The requirement is to find out if any of the filegroups is running out of space. In short, I want to determine how much space is used and how much is free within a file. Can you give me this info please?
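For reference, one way to get exactly this is a query like the sketch below (it uses FILEPROPERTY and the SQL 2000-era sysfiles/sysfilegroups catalogs; sizes are reported in 8 KB pages, so dividing by 128.0 gives MB, and the 1 Gig threshold is the one from the requirement above):

    -- Sketch only: space used and free per data file in the current database.
    SELECT fg.groupname AS filegroup_name,
           f.name AS logical_file,
           f.size / 128.0 AS size_mb,
           FILEPROPERTY(f.name, 'SpaceUsed') / 128.0 AS used_mb,
           (f.size - FILEPROPERTY(f.name, 'SpaceUsed')) / 128.0 AS free_mb
    FROM sysfiles f
    JOIN sysfilegroups fg ON fg.groupid = f.groupid

    -- Sketch only: total free space per filegroup, flagging any under 1 Gig.
    SELECT fg.groupname,
           SUM(f.size - FILEPROPERTY(f.name, 'SpaceUsed')) / 128.0 AS free_mb
    FROM sysfiles f
    JOIN sysfilegroups fg ON fg.groupid = f.groupid
    GROUP BY fg.groupname
    HAVING SUM(f.size - FILEPROPERTY(f.name, 'SpaceUsed')) / 128.0 < 1024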
-Nags
"Andrew J. Kelly" <sqlmvpnoooospam@.shadhawk.com> wrote in message
news:#R35MOAJEHA.1144@.TK2MSFTNGP10.phx.gbl...
> Adding new files in response to low free space in the db is a very bad
idea.
> SQL Server uses a proportional fill algorithm that is based on the amount
of
> free space in each file in the filegroup. The ideal situation would be to
> have multiple files all of the same size with the same amount of free
space.
> In your case if you added a new file it would put most of the new data in
> the new file and defeat some of the purpose of spreading the data over
> multiple files. What you should do is manually (or script not autogrow)
> grow each existing file so they all are always the same size.
> --
> Andrew J. Kelly SQL MVP
>
> "Nags" <nags@.DontSpamMe.com> wrote in message
> news:unOwBl$IEHA.3120@.TK2MSFTNGP11.phx.gbl...
to[vbcol=seagreen]
preallocate[vbcol=seagreen]
free[vbcol=seagreen]
> in
than[vbcol=seagreen]
> 1
> Its
each[vbcol=seagreen]

A 3-disk RAID 5 is about the worst-performing RAID there is. You have probably made much more work for yourself than necessary and certainly do not have an
optimum setup. I would venture to guess you could take those 10 disks and
make a RAID 10 with them instead. Then you could place all the files on the
RAID 10 and would probably get better performance than spreading it out like
you have. This is assuming you already have the logs on their own RAID 1.
If you're getting a lot of disk contention, also make sure you have enough RAM.
Andrew J. Kelly SQL MVP
"Nags" <nags@.DontSpamMe.com> wrote in message
news:%23tGJozhJEHA.3216@.tk2msftngp13.phx.gbl...
> Let me explain on how we are managing our disk space and file groups more
> clearly.
> What we had before
> - we have at present taken size estimations of all our current tables and
> our database size.
> - production database is increasing by about 50 % every year based on 3
year
> data
> - we had disk contention
> - we had performance issues because of thisly made more work athn
> What we did to avoid it
> - we have 3 major tables which we have put them in individual file group
and
> their indexes in individual file group
> - we moved all transactions tables (only inserts) into a separate file
group
> and their indexes into another file group
> - we moved all major accessed tables into a separate file group and their
> indexes into another file group
> - we moved all other tables into a separate file group and their indexes
> into another file group
> - we distributed these file groups onto 4 disk volumes, two volumes are 3
> disk raid 5 and 2 volumes are disk mirrored.
> - each file group has been allocated twice the space required
> - each file has been created with 4 Gig
> - file groups have been created so that we can move individual file groups
> to different disk sets when volume grows and we need better performance
> - each file has been created with 4 Gig so that disk space is preallocated
> and does not expand as required, which might cause disk defragmentation
> - we are collecting statistics periodically to find out which disk drive
has
> contention, and if we find it, we will be putting new volumes and moving
the
> file groups on to them
> - each file group is made of multiple files
> The requirement is to find out if any of the file group is running out of
> space. In short I want to determine how much is being used and how much
> space is free within a file. Can you give me this info please ?
>
> -Nags
> "Andrew J. Kelly" <sqlmvpnoooospam@.shadhawk.com> wrote in message
> news:#R35MOAJEHA.1144@.TK2MSFTNGP10.phx.gbl...
> idea.
amount[vbcol=seagreen]
> of
to[vbcol=seagreen]
> space.
in[vbcol=seagreen]
not[vbcol=seagreen]
> to
> preallocate
> free
> than
requirement.[vbcol=seagreen]
> each

Nags,
>- we have 3 major tables, each placed in its own filegroup, with their
>indexes in a separate filegroup
Do you have separate filegroups for the tables and their indexes?
If they are in separate filegroups, then you cannot back up and restore one without the other; the table's filegroup and its indexes' filegroup have to be backed up and restored together as a single unit.
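As a sketch of what that means in practice (the database, filegroup, and path names are made up), a filegroup backup would have to include both filegroups:

    -- Sketch only: back up a table's filegroup and its index filegroup together.
    BACKUP DATABASE MyDB
        FILEGROUP = N'BigTable1FG',
        FILEGROUP = N'BigTable1IdxFG'
        TO DISK = N'E:\backup\MyDB_BigTable1.bak'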
"Andrew J. Kelly" <sqlmvpnoooospam@.shadhawk.com> wrote in message
news:eLYz5gnJEHA.3688@.TK2MSFTNGP10.phx.gbl...
> A 3 disk RAID 5 is about the worst performing RAID there is. You have
> probably made much more work than necessary and certainly do not have an
> optimum setup. I would venture to guess you could take those 10 disks and
> make a RAID 10 with them instead. Then you could place all the files on
the
> RAID 10 and would probably get better performance than spreading it out
like
> you have. This is assuming you already have the logs on their own RAID 1.
> If your getting a lot of disk contention you might also ensure you have
> enough ram as well.
> --
> Andrew J. Kelly SQL MVP
>
> "Nags" <nags@.DontSpamMe.com> wrote in message
> news:%23tGJozhJEHA.3216@.tk2msftngp13.phx.gbl...
more[vbcol=seagreen]
and[vbcol=seagreen]
> year
> and
> group
their[vbcol=seagreen]
3[vbcol=seagreen]
groups[vbcol=seagreen]
preallocated[vbcol=seagreen]
> has
> the
of[vbcol=seagreen]
> amount
be[vbcol=seagreen]
> to
> in
autogrow)[vbcol=seagreen]
> not
<[vbcol=seagreen]
> requirement.
in[vbcol=seagreen]
>
