Tuesday, March 20, 2012
authentication and deployment
We have our first report ready and wanted to go through the deployment from
"dev to test" and then "test to prod". The reports are all called via a web
application.
A few issues:
a. How do I deploy the report so that it will point to different databases
(dev, test, prod)?
b. How do I get around having the login dialog box come up? The user has
already logged into the web app. We are not using Active Directory. We keep
login credentials in our own database.
Are there any good links to help me through these issues?
Thanks in advance.|||Deploying the reports and accessing a different database is simple.
When you create your report, use a shared data source.
After deployment, you can change the connection string contained in this
shared data source using the Report Manager interface
(http://localhost/reports).
The users are prompted for a login because, by default, RS uses Windows
authentication to identify the user.
If you want to use your own security system with RS, you have to develop it
yourself, and this feature is available in the Enterprise edition of RS only.
There are other options, but these options depend on how you render your
reports to the end user.
Do you use RS 2005 + ASP.NET 2.0?
|||Yes, RS 2005 + ASP.NET 2.0.
I will look at the shared datasource, thanks.
Can you give me more information on using our own security?
Thanks
|||Forms Authentication for Reporting Services
http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dnsql2k/html/ufairs.asp
Andy Potter|||First, if you use ASP.NET 2.0, try to use the ReportViewer web control.
This control allows you to "control" the identity of the user when you
access the report server.
So, your web site can use a specific authentication method (like forms-based)
and when you call RS you'll use another identity (<identity impersonate="true"
userName="..." /> in the web.config).
Also...
You can give the anonymous user access to your RS installation and set up
your data source to ask for a login/password. This login/password can be
sent through the web control without displaying this information to the end
user.
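As a sketch, the web.config arrangement described above might look like the following; the login page and the fixed account name are placeholders, not values from this thread:

```xml
<configuration>
  <system.web>
    <!-- The web app itself authenticates users with forms authentication -->
    <authentication mode="Forms">
      <forms loginUrl="Login.aspx" />
    </authentication>
    <!-- Calls from the app to the report server run under one fixed
         Windows account instead of the browsing user's identity -->
    <identity impersonate="true"
              userName="DOMAIN\ReportUser"
              password="..." />
  </system.web>
</configuration>
```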
|||Do all clients have to have .NET 2.0 installed in order to use the ReportViewer
control? I thought I had read this somewhere.
|||No.
It's a web interface; nothing is required on the client.
|||I believe that using the ReportViewer is the way we should go, but I have yet
to see how to control the identity of the user.
I have created a generic report user account on the report server machine
and want to use that for credentials. Because the user has already logged
into our app, we can trust them. I am not sure about setting the impersonate
tag, as it may affect the rest of the app in the wrong way. Not real sure here.
Can you please point me towards a good example of using the ReportViewer
control where I pass my login credentials? For some reason, I'm not
getting this.
Thanks in advance.
Thursday, March 8, 2012
Audit trigger with dynamic SQL and Cursor - Am I close?
Client = Access 2000 SP3 (.adp)
Hi, can anyone advise me on creating an audit process to account for any
changes in a table? I thought I'd come up with quite a good idea and started
to do it, but now I'm stuck. There are several tables which have 20-40
columns each; the plan was to create an audit table with:
CREATE TABLE [dbo].[tbl_Audit] (
    [A_TABLE] [varchar](50) NOT NULL,
    [A_YEAR] [char](4) NOT NULL,
    [A_VERSION] [varchar](10) NOT NULL,
    [A_COST_CENTRE] [varchar](6) NOT NULL,
    [A_ACCOUNT] [varchar](8) NOT NULL,
    [A_COLUMN] [varchar](50) NOT NULL,
    [A_OLD_VALUE] [varchar](1000) NOT NULL,
    [A_NEW_VALUE] [varchar](1000) NOT NULL,
    [A_DATE] [datetime] NULL,
    [A_USER] [varchar](50) NULL
) ON [PRIMARY]
GO
This could then be used as a generic audit table: the table name is logged,
the identifying keys are logged (year, version, cost_centre, account), then
the column name and its old and new values.
This was going to be populated by a trigger which fired after an update; it
would loop through the column names and check the values from the inserted
table against the actual table to see if they were different, and if they
were different, insert into the audit table.
I had started to write the trigger, but hit a problem early on, which is I'd
like to create one trigger which fits all tables with only minor tweaking, so
I thought I'd use a cursor to loop through the column names, but this means
(I think) using a dynamic SQL string. This seems to fail as the inserted
table goes out of scope; e.g. when the trigger below fires, it errors with:
Server: Msg 208, Level 16, State 1, Line 2
Invalid object name 'inserted'.
This I assume is because the inserted table is not in scope of the dynamic
SQL string @SQL1 (below).
alter trigger tg_Estimates_Audit_Update
on dbo.tbl_estimates
after update
as
SET NOCOUNT ON
SET ANSI_NULLS ON
SET ANSI_WARNINGS ON

declare
    @COLUMN varchar(50),
    @SQL1 nvarchar(750),
    @USER varchar(30),
    @TIME datetime

set @USER = (select suser_sname())
set @TIME = (select getdate())

declare TableCursor cursor local static for
    select COLUMN_NAME from information_schema.columns
    where table_name = 'tbl_estimates' and column_name like '%ADJ%'

open TableCursor
fetch next from TableCursor into @COLUMN
while @@fetch_status = 0
begin
    set @SQL1 = '
    insert into tbl_audit (
        a_table,
        a_year,
        a_version,
        a_cost_centre,
        a_account,
        a_column,
        a_old_value,
        a_new_value,
        a_date,
        a_user )
    select
        ''tbl_estimates'' as a_table,
        a.est_year,
        a.est_version,
        a.est_cost_centre,
        a.est_account,
        ''' + @COLUMN + ''' as est_column,
        b.' + @COLUMN + ' as est_old_value,
        a.' + @COLUMN + ' as est_new_value,
        ''' + cast(@TIME as varchar) + ''' as est_date,
        ''' + @USER + ''' as est_user
    from inserted a
    inner join tbl_estimates b on
        a.est_year = b.est_year and
        a.est_version = b.est_version and
        a.est_cost_centre = b.est_cost_centre and
        a.est_account = b.est_account
    where
        a.' + @COLUMN + ' != b.' + @COLUMN

    exec sp_executesql @SQL1
    fetch next from TableCursor into @COLUMN
end
close TableCursor
deallocate TableCursor
I then tried to insert the data from inserted into a #temp table, but again
I think I would need to use exec sp_executesql to do this, as surely the
#temp table would have to be #temp_table+'@USERNAME'; otherwise would there
not possibly be duplicated data if other people are editing the table and
causing the trigger to fire at the same time... or maybe not?
Thanks for reading that! Can anyone offer any ideas as to how I might get
round this, or even a totally different approach if the one I've chosen is
just plain daft.
I thought I was being clever, but maybe that's where I'm going wrong :)
Many thanks.
Steve'o|||Never put cursors in triggers. They just turn your set-based updates
into slow, inefficient and over-complex row-by-row operations.
Dynamic SQL is usually a bad idea too. The security implications of
dynamic code make it particularly unsuitable for audit processes. A
better method to keep this "generic" IMO is to generate static trigger
code automatically at design time using the information_schema.
Your trigger code can look something like the following, for example.
CREATE TRIGGER trg_foo_audit ON foo FOR INSERT, UPDATE, DELETE
AS
INSERT INTO audit_table (col1, col2, ...)
SELECT D.col1, D.col2, ...
FROM Inserted AS I
JOIN Deleted AS D
ON I.key_col = D.key_col
WHERE COALESCE(I.col1,'') <> COALESCE(D.col1,'')
OR COALESCE(I.col2,'') <> COALESCE(D.col2,'')
Hope this helps.
David Portas
SQL Server MVP
--|||Hi David, thanks for the reply!
With the dynamic SQL I was just passing the column name and the username of
the person who updated the data; all the variables are SQL functions, so how is
this bad security? I'm not running the alter trigger statement each time,
sorry if I forgot to change that line; once I've written the trigger it
should stay static, with just the variables changing within the dynamic @SQL1.
When you said:
> A better method to keep this "generic" IMO is to generate static trigger
> code automatically at design time using the information_schema.
Is that not what I had done? Using a cursor to loop through the
information_schema?
The slowness is an issue; it does make it much slower, as I've managed to
get a bit more working and can see it's not good, as it takes 1 second between
updates now.
From what you're saying, though, I have to write a long trigger per table, as
there are 20-40 columns which could be updated, so truckloads of OR..OR..OR.
I was trying to keep a short single trigger which could quickly be re-used
if other tables need auditing.
Thanks again!
Steve.
|||> how is this bad security?
Putting dynamic SQL in a trigger means that every user who has to
update the data must be granted update permissions on the tables,
including the audit table. In other words you lose all the potential
benefits of securing your data through stored procedures and your audit
trail could be wide open to abuse. Whether that's a real issue may
depend on how your middle tier is implemented but if your data access
was suitably abstracted through middle tier code (on a web server, say)
you probably wouldn't need a trigger anyway - you could just call an SP
to do the work. As you've used triggers I'm guessing you haven't
implemented a middle tier in such a way as to prevent users getting
direct access to tables. If I'm wrong then I'm not sure why you would
use a trigger to do this at all.
Read the following article for more information on the issues around
dynamic code:
http://www.sommarskog.se/dynamic_sql.html
> Is that not what I had done? Using a cursor to loop through the
> information_schema?
Yes, but I'm proposing that you don't do it in a trigger. Instead, do
it at *design time* in order to generate the (static) trigger code.
David Portas
SQL Server MVP
--|||I've been having problems replying all day, "MS is having problems with your
request".
Thanks David, for taking the time to explain what to most people is probably
obvious. I guess this is the problem with developing solutions with no
formal training, after a year the essential basics are missing and I've got
bad habits; I shall read that link!
Could I then in theory use the after update trigger to fire a sproc which
does the updating of the tables, would this work? In that the user does not
have permission on the audit table, but the sproc is run by an account which
can only update the audit table and is NOT on the windows network (we use
windows authentication).
Or should I be reading more on the security chapters now too :) (I think I
know the answer to that one).
Currently there is no middle tier, just Access 2000 projects (.adp). Some
of our users are on dialup vpn connections, so server side work seemed to
make sense at the time.
|||Sorry, me again; having gone through your posts again I'm not clear on what
you're suggesting, i.e.:
Automatically? I don't understand, design time is surely now, how would I
generate it automatically now and why?
Surely automatically implies at run-time, not design time?
Your example of:
INSERT INTO audit_table (col1, col2, ...)
SELECT D.col1, D.col2, ...
FROM Inserted AS I
JOIN Deleted AS D
ON I.key_col = D.key_col
WHERE COALESCE(I.col1,'') <> COALESCE(D.col1,'')
OR COALESCE(I.col2,'') <> COALESCE(D.col2,'')
How does this account for possibly more than one table with 20-40 columns
each? Surely this method means you have to specifically name ALL the columns
which could be affected, and you have to have an audit table per source table,
or you have a massive audit table which has most of its columns blank, as in
this example the columns need to match (insert into blah select blah).
The route I was looking at meant only one audit table and only one trigger,
no matter how many tables are to be audited (just change the source table
name and it will work).
My audit table only had one column for the column name, and the column name
was used as actual data rather than the crosstab look of boatloads of
columns to fit every circumstance.
Apologies, as I'm sure I must have simply misunderstood; is there any chance
of a more detailed explanation of what "automatically" means and how your
example accounts for >1 table and loads of columns?
Thanks again.
Steve
|||Steve'o wrote:
> Automatically? I don't understand, design time is surely now, how would I
> generate it automatically now and why?
> Surely automatically implies at run-time, not design time?
What I imagine he's suggesting is that you move your automatic discovery
of table columns out of the trigger and into some code which writes a
script to create the triggers. Your triggers currently operate
inefficiently because they are written to dynamically discover information
(the columns) at runtime which isn't dynamic at runtime. Every single
time the triggers fire, they suffer a penalty by having to rediscover
the same columns over and over again.
What's preferable is to write some small program or somewhat less small
SQL script which reads the information about your tables and their
columns from INFORMATION_SCHEMA views or other system info source and
then THAT would assemble and output a SQL script containing dynamically
generated CREATE TRIGGER instructions. The trigger code itself would
then be static and efficient because you've done the column discovery
externally.
For an example you can look at our OmniAudit program to create audit
triggers (hey, I did tell you how to do it yourself first). Just looking
at the triggers created by the trial version will give you some ideas
for your own trigger-generating process.
http://www.krell-software.com/omniaudit
Steve Troxell
http://www.krell-software.com
Audit trail - Sql Server 2000
Greetings to all.
We are building up a new application using SQL Server 2000 & .NET.
Need to build a basic "Audit trail" process, in which we need to store
{User_Id/ DateTime/ Status/Process Name}.
What's the best strategy for implementing "Audit trail" in SQL Server 2000
applications?
Thanks for your time.|||I'm assuming that "Status" and "Process Name" are not columns in tables, but
rather logical names in your application. The simplest way is to use
something like .NET Enterprise Library or even build your own simple logic
in your application to keep track of this.
Typically, in the database, if you want to track changes made to your
tables, you use triggers, but in your case, I think you want to track what
"functions" a user is using?
--
HTH,
SriSamp
Email: srisamp@gmail.com
Blog: http://blogs.sqlxml.org/srinivassampath
URL: http://www32.brinkster.com/srisamp
|||Thanks for your response. Let me explain my situation once again:
e.g. I have a screen & different statuses of an order on that screen, i.e.
Created/Billed/Processed/Manufactured/Shipped.
What I want to do is record which user changed the above order
status, and at what time.
Which means for each record, I want to build an audit trail.
I am aware that I can create my own audit table & can create DB triggers
for recording details, or a stored proc can do the same for me...
I am looking for the best strategy to build an audit trail.
Thank you very much for your precious time & valuable inputs.
Regards.
|||I would suggest triggers and audit tables. Try to keep the trigger as light
as possible. What kind of load (# of users, volume of data, etc.) does your
system have?
Alain Quesnel
alainsansspam@.logiquel.com
www.logiquel.com
"SSUK" <SSUK@.newsgroups.nospam> wrote in message
news:ADC87BEF-D824-49DD-B840-304CD99C332D@.microsoft.com...
> Thanks for your response. Let me explain my situation once again:-
> e.g. I have a screen & different status of order on that screen i.e.
> Created/Billed/Processed/Manufactured/Shipped.
> What I want to do is, I want to record, which user had changed above order
> status at what time?
> Which means for each record, I want to build an Audit Trail?
> I am aware that ,I can create my own Audit table & can create DB Triggers
> for recording details ,or stored Proc can do same for me.
> I am looking for what's the best strategy to build an Audit trail.
> Thank you very much for your precious time & Valuable inputs.
> Regards.
>
> "SriSamp" wrote:
>|||Hi,
Apart from triggers and audit tables, does SQL Server come with built-in
keywords like AUDIT? Or is there any other standard/better approach to handle it?
Oracle allows us to audit data using the 'AUDIT' command. For example:
AUDIT DELETE ON my_table;
Regards
"Alain Quesnel" wrote:
> I would suggest triggers and audit tables. Try to keep the trigger as light
> as possible. What kind of load (# of users, volume of data, etc.) does your
> system have?
> --
> Alain Quesnel
> alainsansspam@.logiquel.com
> www.logiquel.com
>
> "SSUK" <SSUK@.newsgroups.nospam> wrote in message
> news:ADC87BEF-D824-49DD-B840-304CD99C332D@.microsoft.com...
>
>|||Hi,
In terms of users/load, it's 50-75 users at any one time.
"Alain Quesnel" wrote:
> I would suggest triggers and audit tables. Try to keep the trigger as light
> as possible. What kind of load (# of users, volume of data, etc.) does your
> system have?
> --
> Alain Quesnel
> alainsansspam@.logiquel.com
> www.logiquel.com
>
> "SSUK" <SSUK@.newsgroups.nospam> wrote in message
> news:ADC87BEF-D824-49DD-B840-304CD99C332D@.microsoft.com...
>
>|||I don't think there are any default tools, but I'd personally steer
clear of triggers. While they're great at doing what they do, and lend
themselves naturally to an audit trail, I've found the main drawback of
them is that you can't see them. When someone else comes to maintain
your application, they would have to know that the triggers are there.
I know that with good documentation and skilled staff this would appear
simple, but you can't generally count on either.
My preferred method (given this drawback of triggers) is to write a
sufficiently generic auditing stored procedure, then within your insert
/ update / delete stored procs call this method. This has the advantage
that your auditing behaviour is centralised, and therefore if you need
to change it you won't need to iterate through the triggers, but also
that the operation is visible to anyone else who needs to pick up your
app.
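A rough sketch of that pattern, with all table, proc, and parameter names invented for illustration:

```sql
-- Generic audit proc: one place to change if the auditing rules change.
CREATE PROCEDURE dbo.usp_WriteAudit
    @UserID      INT,
    @ProcessName VARCHAR(128),
    @Status      VARCHAR(50)
AS
    INSERT INTO dbo.AuditTrail (UserID, ProcessName, Status, LoggedAt)
    VALUES (@UserID, @ProcessName, @Status, GETDATE())
GO

-- Each insert/update/delete proc does its work, then calls the audit proc,
-- passing the application user through explicitly.
CREATE PROCEDURE dbo.usp_UpdateOrderStatus
    @OrderID   INT,
    @NewStatus VARCHAR(20),
    @UserID    INT
AS
    UPDATE dbo.Orders SET Status = @NewStatus WHERE OrderID = @OrderID

    EXEC dbo.usp_WriteAudit @UserID, 'usp_UpdateOrderStatus', @NewStatus
GO
```

Because the app user is passed as a parameter rather than taken from the connection, this works even when the web app uses a single shared SQL login.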
I suspect this is a sufficiently controversial suggestion to get
utterly thrashed on here now, so... off you go guys, above the belt
with the punches please.|||Thanks for your reply. I agree with your suggestion to quite some extent, but I
am wondering: do we have a better/inbuilt way of handling this? Or
any more standard way (best practice) of handling this?
"Will" wrote:
> I don't think there are any default tools, but I'd personally steer
> clear of triggers. While they're great at doing what they do, and lend
> themselves naturally to an audit trail, I've found the main drawback of
> them is that you can't see them. When someone else comes to maintain
> your application, they would have to know that the triggers are there.
> I know that with good documentation and skilled staff this would appear
> simple, but you can't generally count on either.
> My preferred method (given this drawback of triggers) is to write a
> sufficiently generic auditing stored procedure, then within your insert
> / update / delete stored procs call this method. This has the advantage
> that your auditing behaviour is centralised, and therefore if you need
> to change it you won't need to iterate through the triggers, but also
> that the operation is visible to anyone else who needs to pick up your
> app.
> I suspect this is a sufficiently controversial suggestion to get
> utterly thrashed on here now, so... off you go guys, above the belt
> with the punches please.
>|||I don't know of any in-built methods, I'm pretty sure there aren't any.
However the information you'll be recording is usually fairly specific
to your app. My advice would be either to maintain auditing in the
stored procs, or, depending on the structure of your web app you could
enhance your data access layer to add a custom audit module that
controls all the auditing.
Personally I prefer keeping out of SQL Server-based auditing for
several reasons:
1) you can more easily re-use your auditing code
2) you can more easily audit to a centralised database for several apps
3) by auditing to a separate database you are not having to back up
tracing data along with your actual app data (though you do have to
consider re-synchronisation when restoring)
4) you are able to capture web app info more easily (e.g. if you're
using forms based authentication you can capture usernames, or you can
capture the page that the update originated from).
5) the auditing is then done for your application, rather than on your
db. This has the advantage that if someone else makes a custom update
to your database, you can't accuse the application of it. On the other
hand, you don't have the traceability of what was done (but personally
I prefer the idea that your app only audits things that it does).
6) the auditing is more visible and more maintainable
I could probably come up with more reasons, but this should indicate my
preference.
Perhaps someone else could recommend some off the shelf plug-ins for
this?
Will|||Just to give you a couple more options to evaluate:
Have a server-side Profiler trace going in which you capture the relevant events.
Use some of the 3rd-party (transaction) log reader tools, where some have explicit audit
capabilities. The transaction log contains information about all modifications, and some of these
tools can also log SELECT using a Profiler trace in conjunction with the transaction log. I've
listed some of these log reader tools on my links page:
http://www.karaszi.com/SQLServer/links.asp
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
"SSUK" <SSUK@.newsgroups.nospam> wrote in message
news:7C422ADD-588A-4B34-BD0B-727C6F625BF9@.microsoft.com...
> Hi,
> Greetings to all.
> We are building up a new application using SQL Server 2000 & .NET.
> Need to build basic "Audit trail" process, in which need to store {User_Id/
> DateTime/ Status/Process Name}.
> What's the best strategy for implementing "Audit trail" in SQL Server 2000
> Applications?
> Thanks for your time.
>
Friday, February 24, 2012
attribute key was not found error
Every time I try to process my cube I get this error:
Errors in the OLAP storage engine: The attribute key cannot be found:
Table: dbo_MCSFinFactData, Column: InvoiceDateDimensionID, Value: 15.
Errors in the OLAP storage engine: The record was skipped because the attribute key was not found.
Attribute: InvoiceDateDimensionID 8 of Dimension: InvoiceDate Fiscal Year 2 from Database:
SRDBAnalysis, Cube: MCSFinancial, Measure Group: MCSFinancial, Partition: MCSFinancial, Record: 10.
I have checked the dimension table, and the record with an InvoiceDateDimensionID value of 15 exists. There are also many records in the fact table that use an InvoiceDateDimensionID of 15. If the record exists in both tables, why does it say the attribute key was not found?
I have resolved this, only I don't understand why my actions resolved the issue, and I was hoping someone could explain.
In the error (in the above post) you see the attribute was "InvoiceDate Fiscal Year 2". I just processed that attribute manually, and had to do the same with a handful of other attributes, and now the cube works. Why is this? Also, is there a quicker way, as I'd rather not have to process 20-30 attributes manually each time there is a problem!
|||Here is another thread on the same matter
http://forums.microsoft.com/MSDN/ShowPost.aspx?PostID=1198671&SiteID=1
The ultimate resolution should be fixing referential integrity problems in relational database and not processing dimensions in different order.
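One way to find those referential-integrity problems before processing is to look for fact rows whose key has no match in the dimension. The fact table and key column below come from the error message in this thread; the dimension table name is a guess:

```sql
-- List fact keys that do not exist in the dimension (orphan keys).
SELECT f.InvoiceDateDimensionID, COUNT(*) AS OrphanRows
FROM dbo.MCSFinFactData AS f
LEFT JOIN dbo.InvoiceDateDimension AS d   -- assumed dimension table name
    ON d.InvoiceDateDimensionID = f.InvoiceDateDimensionID
WHERE d.InvoiceDateDimensionID IS NULL
GROUP BY f.InvoiceDateDimensionID
```

If this returns no rows but processing still fails, check for trailing spaces, collation differences, or NULL keys; the attribute key AS2005 builds may come from a different column than you expect.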
BTW, you cannot process a single attribute by itself
Edward.
--
This posting is provided "AS IS" with no warranties, and confers no rights.
Analysis Services 2005 is a little more strict compared to AS2000 in the way it detects inconsistencies in the relational database. If you run SQL Profiler you'd notice that it also uses a different strategy when querying the relational database during processing.
Every attribute is processed separately: in AS2000 you'd see one query per dimension, while in AS2005 you see multiple SQL queries sent, one per dimension attribute.
See this paper describing the new processing architecture: http://msdn2.microsoft.com/en-us/library/ms345142.aspx
As for detecting inconsistencies in the relational database, it can be tricky. For instance, one reason Analysis Server wouldn't find a key for an attribute member is that you allowed processing to ignore records with a duplicate key. Set KeyDuplicate to ReportAndStop; in fact, set ReportAndStop for every setting in the ErrorConfiguration of your processing command.
Here is a whitepaper that talks about setting the error configuration: http://msdn2.microsoft.com/en-us/library/ms345138.aspx
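For reference, the error configuration travels inside the XMLA Process command; a minimal fragment with the settings mentioned above might look like this (element names per the SSAS 2005 DDL schema; verify against your build):

```xml
<!-- Fail processing loudly instead of silently skipping bad records. -->
<ErrorConfiguration>
  <KeyNotFound>ReportAndStop</KeyNotFound>
  <KeyDuplicate>ReportAndStop</KeyDuplicate>
  <NullKeyNotAllowed>ReportAndStop</NullKeyNotAllowed>
</ErrorConfiguration>
```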
It is also important to review the new dimension structure and see what becomes the source for the attribute key column(s) and what its name is.
The migration process might have some quirks in complex cases when mapping an AS2000 database to AS2005. As for Analysis Server detecting missing keys: every time I run into a missing-key case, it always comes down to some modeling or data issue.
Edward.
--
This posting is provided "AS IS" with no warranties, and confers no rights.
Attribute key not found vs characters
Hello,
When I process one of my dimensions, I get the error message "attribute key not found".
It's not linked to records present in the fact table but missing from the dimension table; rather, it is linked to attribute values that contain symbols.
To be precise, I first load data from a DB2 AS/400 database into SQL Server 2005 (with the ADO.NET ODBC provider).
In the DB2 AS/400 database the data can be in different languages (Chinese, Japanese, ...), but so far it is not necessary to display the data in the correct language (moreover, the database is not set up in Unicode, and 1 specific language = 1 CCSID value...).
So I could have a client name containing symbols, like: ""
I think this is what SSAS doesn't like when I process the dimension...
When I set the KeyNotFound property of the dimension to "Ignore Error", I can then process my dimension, but when I browse it, I get an error ("the server sent an unrecognizable response").
Do you have any idea how to deal with this error?
I am thinking about eliminating symbols like "" to begin with, but I don't know if there are better (and less dirty) solutions...
Thanks,
Guillaume
There are two properties on the attribute's key columns that you should check: Collation and InvalidXmlCharacters. The first sets the character set and case sensitivity for the column in question. When not set at the column level, it inherits from the parent object, usually all the way up to the default settings specified when the OLAP server is installed. This should allow the server to understand when two different character codes represent equivalent or different characters. The second property, InvalidXmlCharacters, determines the behavior of the server when it encounters characters that are not valid in XML without some encoding. For performance reasons, the server will not check for the presence of such characters unless you set the InvalidXmlCharacters property. As a result, if such invalid characters exist, the response the server sends to the client will not be valid XML.|||Thanks for your reply, it works now!
I need to ignore the "attribute key not found" error to be able to process the dimension, but thanks to the InvalidXmlCharacters option set to "Remove" I can now browse my attributes.
Guillaume
Thursday, February 16, 2012
attaching system database
I would like to ask if the system databases from one server can be
attached to another server. Basically, the process is: the existing
system databases (including user databases) will be replaced by another
set of databases (system and user databases); the new set of
databases is located on a separate drive in a cluster
environment. Can this procedure be applied?
http://support.microsoft.com/kb/224071
I am not sure if this will work. Just need to verify.
Thanks.
This will work. In some cases, you can simply copy the MDF/LDF files to the
same location where the system database files exist while the SQL Server
service is stopped. Once it starts, it should see those files. I used this
approach when I needed to set the collation settings for my tempdb. Instead
of rebuilding the master database, I just copied the model database files
from a server which has the same collation settings I needed for my tempdb,
and it worked.
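To verify the copy took effect, you can compare the collations after the service restart; this is standard T-SQL with no assumptions beyond the database names:

```sql
-- Both should report the collation you copied in from the other server.
SELECT DATABASEPROPERTYEX('model',  'Collation') AS ModelCollation,
       DATABASEPROPERTYEX('tempdb', 'Collation') AS TempdbCollation
```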
"Bry" <fqies_bry@.yahoo.com> wrote in message
news:1186108853.215054.30080@.j4g2000prf.googlegroups.com...
> Hi,
> I would like to ask if system database from other server can be
> attached to other server. So basically, the process is, the existing
> system database(including user databases) will be replaced by other
> set of database(system and user databases) the new set of the
> databases are just located in a separate drive in a cluster
> environment. Will this procedure can apply?
> http://support.microsoft.com/kb/224071
> I am not sure if this will work. Just need to verify.
> Thanks.
>
|||What if I have to use the other drive where they put the database
files? That is, I will modify the SQL (as the KB article says) to change the drive for
the system databases. Will there be implications/problems?
|||I was about to say this...this is very important as your backups will not
work
"Jonathan Psaila-Depasquale" <nospam@.nospamplease.com> wrote in message
news:OLlqTka1HHA.3916@.TK2MSFTNGP02.phx.gbl...
> Hi,
> Make sure that your destination SQL Server instance is patched up to the
> same build as the instance from which you are taking the system databases.
> Jonathan
>
> Bry wrote:
Monday, February 13, 2012
Attaching Database does not name database correctly
I have a situation where I need to rename a database and its physical files
via a scripting process. I am having a problem in the last step, where I am
reattaching the renamed files with the new database name... the problem is
that no matter what I attempt to rename the database to, it keeps the old
database name in the properties. Incidentally, this even happens using
Management Studio: if you detach the database, rename the data and log
files, and then reattach and check the properties of the database under
the Files tab, you will see that it kept the original database name.
Example:
Original filename 00002014
1. Detach the database with sp_detach_db - Works fine
2. Set the permissions on the data and log file with cacls.exe - Works fine
3. Create a renamed backup copy of the original data and log files using
xp_cmdshell Copy - Works fine
4. Create a renamed data and log file that is the same as the new database
name will be. - Works fine
5. Create a database for Attaching - Weirdness occurs:
CREATE DATABASE [90002014] ON
( FILENAME = N'D:\Microsoft SQL Server\90002014.mdf' ),
( FILENAME = N'D:\Microsoft SQL Server\90002014_log.ldf' )
FOR ATTACH
The problem is that even though the database is created and
attached correctly, the name when I query "master.dbo.sysaltfiles" is the OLD
database name. So instead of 90002014 it was still the old 00002014, but the
database name on the database in Management Studio is the new name, 90002014.
It is crazy! How do I change the actual name of the database when I do the
CREATE DATABASE FOR ATTACH procedure? (Or, for that matter, using the
SP_ATTACH_DB stored proc; both have the same problem.)
Ron
You must be talking about the LOGICAL names of the files, not the database
itself. This is normal. To change the LOGICAL names you use the ALTER
DATABASE command with the MODIFY FILE option to change the Logical name to a
new one.
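Applied to the names in this thread, and assuming the logical file names were derived from the original file name (they may differ; check sysaltfiles first), that would be something like:

```sql
-- Logical file names assumed to match the old physical names; verify with:
--   SELECT name FROM master.dbo.sysaltfiles WHERE dbid = DB_ID('90002014')
ALTER DATABASE [90002014]
MODIFY FILE (NAME = N'00002014', NEWNAME = N'90002014')

ALTER DATABASE [90002014]
MODIFY FILE (NAME = N'00002014_log', NEWNAME = N'90002014_log')
```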
Andrew J. Kelly SQL MVP
"RSH" <way_beyond_oops@.yahoo.com> wrote in message
news:uDbx7jKPGHA.2300@.TK2MSFTNGP15.phx.gbl...
> Here are the steps I have taken:
> I have a situation where I need to rename a database and its physical
> files via a scripting process. I am having a problem in the last step
> where I am reattaching the renamed files with the new database name...the
> problem is that no matter what I attempt to rename the database to it
> keeps the old database name in the properties. Incidently this even
> happens using Management Studio...if you detach the database, rename the
> data and log file, and then reattach it then check the properties of the
> database under the files tab and you will see that it kept the original
> database name.
>
> Example:
> Original filename 00002014
>
> 1. Detach the database with sp_detach_db - Works fine
> 2. Set the permissions on the data and log file with cacls.exe - Works
> fine
> 3. Create a renamed backup copy of the original data and log files using
> xp_cmdshell Copy - Works fine
> 4. Create a renamed data and log file that is the same as the new
> database name will be. - Works fine
> 5. Create a database for Attaching - Weirdness occurs:
>
> CREATE DATABASE [90002014] ON
> ( FILENAME = N'D:\Microsoft SQL Server\90002014.mdf' ),
> ( FILENAME = N'D:\Microsoft SQL Server\90002014_log.ldf' )
> FOR ATTACH
>
> The problem that happens is that even though the database is created and
> attached correctly.the name when I query "master.dbo.sysaltfiles" is the
> OLD database name. So instead of 90002014 it was still the old 00002014
> but the database name on the database in management studio is the new name
> 90002014.
>
> It is crazy! How do I change the actual name of the database when I do
> the CREATE DATABASE FOR ATTACH Procedure. (or for that matter using the
> SP_ATTACH_DB stored proc, both have the same problem)
>
> Ron
>|||That was it...THANKS!
Ron
"Andrew J. Kelly" <sqlmvpnooospam@.shadhawk.com> wrote in message
news:u2Ogc5KPGHA.3924@.TK2MSFTNGP14.phx.gbl...
> You must be talking about the LOGICAL names of the files not the database
> itself. This is normal. To change the LOGICAL names you use the ALTER
> DATABASE command with the MODIFY FILE option to change the Logical name to
> a new one.
> --
> Andrew J. Kelly SQL MVP
>
> "RSH" <way_beyond_oops@.yahoo.com> wrote in message
> news:uDbx7jKPGHA.2300@.TK2MSFTNGP15.phx.gbl...
>