Friday, March 30, 2012

Peer-to-peer replication and Oracle real application clusters

Hello,
I have a question I hope can be answered fully.
Some time last year, I attended a Microsoft presentation in Dallas, and a SQL
Server specialist from Microsoft (I think his name was Srikan) mentioned
during Q&A that SQL Server 2005 has peer-to-peer replication, which is a
comparable solution to Oracle's Real Application Clusters, with some
restrictions.
I recently attended another presentation, also at Microsoft, but this time by
one of their partners who is supposed to be a SQL Server expert. This speaker
said that it is not a comparable solution. He gave several reasons, and they
seemed to make sense, but then again, a Microsoft technical specialist said
differently last year.
Did the product change? Who should I listen to? My co-workers say I should
listen to Microsoft because they created the product, but the other guy seems
to know a lot also and said he used to be an Oracle DBA. I am confused.
Can anybody help here? I don't want to ask Oracle because they want a lot of
information from me before they will answer, and their newsgroup people are
quite unfriendly.
Thank you.
aK.
No, peer-to-peer replication is not a replacement for RAC. Unfortunately,
the technical specialist would be wrong in a literal interpretation. If you
look at the basic data flow, there is some similarity in the solutions, but
it disappears when you start looking a lot deeper.
I don't know a huge amount about RAC, but I understand the basics. With RAC
you are essentially plugging N servers into an Oracle architecture that
accesses a single database. What you wind up with is, theoretically, all of
your hardware resources (memory, processors, network I/O) pooled together,
which an application can basically treat as a single massive server.
They still point to a single database on the back end. You still have to
deal with all of the issues related to changes coming from multiple servers
into a single database, which now has to resolve conflicting changes on the
fly, along with several other implementation-related issues.
Peer-to-peer replication is a logical extension of bi-directional
transactional replication. It handles the data layer only. The
replication engine is used to replicate, as the name says, "from everyone,
to everyone". There is ZERO conflict detection or resolution, so your data
changes must be partitioned. If two users were to change the same piece of
data on different servers at the same time, it would create a huge
synchronization issue.
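To make the partitioning point concrete, one common approach is to constrain which rows each node is allowed to modify locally. This is only a sketch under that assumption; the table, column, and constraint names are made up for illustration:

```sql
-- Hypothetical example: each peer only ever modifies rows for its own region,
-- so no two nodes can change the same row concurrently.

-- On Node1 (owns region 1):
ALTER TABLE dbo.Orders WITH NOCHECK
    ADD CONSTRAINT CK_Orders_Node1
    CHECK NOT FOR REPLICATION (RegionID = 1);

-- On Node2 (owns region 2):
ALTER TABLE dbo.Orders WITH NOCHECK
    ADD CONSTRAINT CK_Orders_Node2
    CHECK NOT FOR REPLICATION (RegionID = 2);
```

The NOT FOR REPLICATION clause makes each constraint apply only to local user changes, so rows arriving from the other peer via the Distribution Agent are not rejected.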
What problem are you trying to solve?
Mike
http://www.solidqualitylearning.com
Disclaimer: This communication is an original work and represents my sole
views on the subject. It does not represent the views of any other person
or entity either by inference or direct reference.
"Angie" <NO_Angie_Kong_SPAM@.hotmail.com> wrote in message
news:ORao5BrMGHA.648@.TK2MSFTNGP14.phx.gbl...

Peer-to-peer Replication Agent Error

Does anyone know why, if I set up P2P replication, I get:
"Could not find a valid Agent Id for the subscription to Publisher
XXXServer, ...
Source: MSSQLServer, Error number: 21758"
This message only happens on the replication from ServerA to ServerB...
ServerB replicates to ServerA just fine...
Any help would be appreciated, since the link to Microsoft help says the
error is not documented.
Thanks!
Can you post the exact steps you took to configure this? What you are
describing is not peer-to-peer replication, but seems to be bi-directional
transactional instead.
Mike
Mentor
Solid Quality Learning
http://www.solidqualitylearning.com
"Mick Kowitz" <mkowitz@.cinci.rr.com> wrote in message
news:f%wwf.14644$%D1.9161@.tornado.ohiordc.rr.com.. .
|||I actually got it working. The problem was that, while in the peer-to-peer
topology adding the SQL Server, I was trying to use Windows Authentication to
connect to the peer, but since I was in a WORKGROUP that wasn't working. As
soon as I connected with SQL Authentication it worked...
I'm wondering if you have any ideas, though, on a new problem...
I have a table in the P2P replication that has an IDENTITY column as the key
and an ON INSERT trigger that adds a second record. When I replicate, it
fails, telling me to set NOT FOR REPLICATION to NO. I found the MS article
http://support.microsoft.com/?kbid=908711 on this. However, I can't set NOT
FOR REPLICATION to NO in a P2P situation as far as I can tell. It's easily
reproduced: just create a simple table with an IDENTITY column as the primary
key and then an ON INSERT trigger that adds a second row. When you create
the publication and try to run the P2P topology wizard, it errors out,
telling you that you can't do P2P when it is set to manual.
Any ideas?
Mick...
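For what it's worth, the usual way to keep a trigger like that from double-firing at the other peer is to mark the trigger itself NOT FOR REPLICATION, so it runs only for local user inserts and not when the Distribution Agent applies replicated rows. A minimal sketch of the repro with that option (table and trigger names are made up):

```sql
CREATE TABLE dbo.Widget
(
    WidgetId int IDENTITY(1,1) NOT FOR REPLICATION PRIMARY KEY,
    ParentId int NULL,
    Name     nvarchar(50) NOT NULL
);
GO

-- NOT FOR REPLICATION: the trigger fires for local INSERTs only, not when
-- the Distribution Agent applies rows from the other peer (the second row
-- it creates is replicated to the peer as its own transaction anyway).
CREATE TRIGGER dbo.trg_Widget_Insert
ON dbo.Widget
AFTER INSERT
NOT FOR REPLICATION
AS
BEGIN
    INSERT dbo.Widget (ParentId, Name)
    SELECT i.WidgetId, i.Name + N' (copy)'
    FROM inserted AS i
    WHERE i.ParentId IS NULL;   -- guard so the copy does not trigger another copy
END
GO
```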
"Michael Hotek" <mike@.solidqualitylearning.com> wrote in message
news:u9eJiGVFGHA.1028@.TK2MSFTNGP11.phx.gbl...

Peer-to-Peer Replication (Programmatic Control and Setup)

I have reviewed the BOL documentation on how to configure Peer-to-Peer replication via T-SQL and how to use the Replication Wizard to implement replication.

What I would like to find out is how to configure the peer-to-peer replication process to use an existing GUID column on a table instead of having an extra uniqueidentifier column created. When you use the Wizard, each table article has this extra column added to it.

I can't seem to find it in Books Online. Can someone point me to the correct article or BOL page?

Thank you.

...cordell...

I do not think that P2P replication adds a uniqueidentifier column to a published table. Transactional replication with queued updating and merge replication will add required columns to a table that you want to publish. Please confirm which type of replication you are trying to use.

A general suggestion is to use the UI to implement the replication scenario that you want, then use the scripting features in the UI to see the syntax for the new column that was added.

E.g., merge replication will add the following column to a table that is merge published...

[rowguid] [uniqueidentifier] ROWGUIDCOL NOT NULL CONSTRAINT

Check out this link for more details on how merge replication adds a rowguid uniqueidentifier column if one does not exist: http://msdn2.microsoft.com/en-us/library/aa256294(SQL.80).aspx

Hope this helps,

Tom

This posting is provided "AS IS" with no warranties, and confers no rights.

Peer2Peer Transactional Replication

Hi,
Anyone know why views/stored procedures do not replicate from ServerA to
ServerB?
Thanks,
Kenny
They will be in place if you deploy your subscribers from a backup. If you
create the subscriptions manually, you will need to create the objects on
each node as well.
Note that if you create a new view or stored procedure later, it will not be
replicated to each node.
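One way to push new procedures or views out after the fact is sp_addscriptexec, which posts a script file to all subscribers of a publication. A sketch under assumed names (the publication name and UNC path are hypothetical):

```sql
-- Run at the publisher, in the publication database.
-- The script file must be on a share the replication agents can reach.
EXEC sp_addscriptexec
    @publication = N'MyP2PPub',                      -- hypothetical publication name
    @scriptfile  = N'\\fileserver\scripts\new_view.sql';
```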
http://www.zetainteractive.com - Shift Happens!
Looking for a SQL Server replication book?
http://www.nwsu.com/0974973602.html
Looking for a FAQ on Indexing Services/SQL FTS
http://www.indexserverfaq.com
"Kenny" <Kenny@.discussions.microsoft.com> wrote in message
news:5CA78A4E-78B4-488C-BF83-2D4B88A1FCA9@.microsoft.com...

peer-2-peer Replication

I am using a transactional publication with updatable subscriptions on a
database. I need to enable peer-to-peer replication, but the option is
disabled. Any reason for this?
Peer-to-peer cannot be enabled if you are using queued updating or
immediate updating.
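In other words, the publication has to be a plain transactional publication. A minimal sketch of creating one with peer-to-peer enabled via T-SQL (the publication name is hypothetical; run in the publication database):

```sql
-- A plain transactional publication: queued and immediate updating are off,
-- which is what allows @enabled_for_p2p to be set.
EXEC sp_addpublication
    @publication       = N'P2P_Pub',    -- hypothetical name
    @restricted        = N'false',
    @sync_method       = N'native',
    @repl_freq         = N'continuous',
    @enabled_for_p2p   = N'true',
    @allow_queued_tran = N'false',      -- queued updating must be off
    @allow_sync_tran   = N'false',      -- immediate updating must be off
    @allow_initialize_from_backup = N'true';
```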
Mike
http://www.solidqualitylearning.com
Disclaimer: This communication is an original work and represents my sole
views on the subject. It does not represent the views of any other person
or entity either by inference or direct reference.
<dineshasanka@.gmail.com> wrote in message
news:1142322592.136227.68820@.i40g2000cwc.googlegro ups.com...

Peer to Peer Replication Problem with BLOBs

I've set up a development peer to peer replication between 2 SQL Server 2005 on 2 Windows XP SP2 systems.

The Distribution Agent is configured to use UseOledbStreaming. Saving smaller BLOBs of approx. 10000 KB works fine. When I try to replicate larger data, I get the following error message in the synchronization status window and in the Distributor To Subscriber History tab in Replication Monitor:

Der Verteilungs-Agent verwendet die OLE DB-Streamingoptimierung für die Replikation von BLOB-Daten (Binary Large Objects) mit mehr als 16384 Bytes. Der Schwellenwert für die Optimierung kann über die Befehlszeile mithilfe des -OledbStreamThreshold-Parameters festgelegt werden.
(Quelle: MSSQL_REPL, Fehlernummer: MSSQL_REPL21060)
Hilfe abrufen: http://help/MSSQL_REPL21060

I also tried different values of OledbStreamThreshold.

I need to be able to replicate BLOBs in size of up to 1 GB.

I'm writing the BLOB data in chunks to the table DB_BLOBData, which isn't an article in the replication, and at the end I copy it to the destination table MyBLOBTable, which is an article. I do this because I noticed that the transaction log on the subscriber database becomes incredibly large (>60 GB for maybe 50 1000-KB BLOBs written in blocks of 65536 bytes) when updating the row directly using BLOB.Write(...). I think that for each write at least the whole BLOB content is written to the transaction log.

The replication uses stored procedures, with updates using the SCALL syntax.

The code of the SP:

...
IF @action = 1
BEGIN
    UPDATE MyBLOBTable
       SET DocContent = (SELECT BLOB FROM DB_BLOBData WHERE GuidId = @id),
           DocSize    = (SELECT DATALENGTH(BLOB) FROM DB_BLOBData WHERE GuidId = @id)
     WHERE DocId = @id

    DELETE FROM DB_BLOBData WHERE GuidId = @id
END

IF @action = 0
BEGIN
    IF @Offset = 0
    BEGIN
        INSERT DB_BLOBData (GuidId, BLOB)
        VALUES (@id, @value)
    END
    ELSE
    BEGIN
        UPDATE DB_BLOBData
           SET BLOB.Write(@value, @Offset, @Length)
         WHERE GuidId = @id
    END
END

SELECT @ModDttm   = modDttm,
       @ModUser   = modUser,
       @ModClient = modClient,
       @ModAppl   = modAppl
  FROM MyBLOBTable
 WHERE DocId = @id

RETURN 0
END

TIA,

Hannoman

Hi Hannoman,

While my ancestors are from Germany, my German is severely lacking, hence my inability to decipher the error message, but you might look at the following in BOL:

ms-help://MS.SQLCC.v9/MS.SQLSVR.v9.en/rpldata9/html/11061c08-661a-4386-8996-9685cd81de05.htm

Particularly the section that talks about LOBs. I don't know if that answers your question, but if it doesn't, hopefully it will point you in the right direction.

Scott

|||

Hello Scott

I'm using transactional peer to peer replication.

The error message means something like

The Distribution Agent uses OLE DB streaming optimization for replicating BLOB (binary large object) data larger than 16384 bytes. The threshold for the optimization can be set on the command line using the -OledbStreamThreshold parameter.

(Source: MSSQL_REPL, Error number: MSSQL_REPL21060)
Get help: http://help/MSSQL_REPL21060

The problem is that this kind of error deadlocks the replication. As I don't know how to handle it, I have to delete the publications and subscriptions and set them up again. In a production environment this wouldn't be a solution.

Hannoman

|||Unfortunately, this is a known bug for peer-to-peer in SQL 2005; as a workaround you can try setting -CommitBatchSize to 1.
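If it helps, agent parameters like that can be appended to the Distribution Agent's job step command in msdb. A sketch only; the job name, step id, and the rest of the command line below are placeholders for whatever your own agent job actually uses, so script the existing step first and edit that:

```sql
-- Append -CommitBatchSize 1 (and an explicit -OledbStreamThreshold) to the
-- "Run agent." step of the Distribution Agent job. All names/values below
-- are hypothetical; copy your existing step command and add the flags to it.
EXEC msdb.dbo.sp_update_jobstep
    @job_name = N'SERVERA-PubDb-P2P_Pub-SERVERB-1',   -- hypothetical agent job name
    @step_id  = 2,                                    -- typically the "Run agent." step
    @command  = N'-Publisher [SERVERA] -PublisherDB [PubDb] '
              + N'-Distributor [SERVERA] -Subscriber [SERVERB] '
              + N'-SubscriberDB [PubDb] -Continuous '
              + N'-CommitBatchSize 1 -OledbStreamThreshold 16384';
```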


Peer to peer replication on SQL 2000?

Hello,
reading about peer-to-peer transactional replication in SQL 2005
(http://msdn2.microsoft.com/ms152536.aspx), I would like to do the same
thing with 3 SQL 2000 servers. Of course peer-to-peer replication is not
available, but can anybody suggest a way to simulate this kind of
replication with SQL 2000?
thanks,
al.
AL,
the nearest thing is probably bi-directional transactional replication:
http://support.microsoft.com/default...b;en-us;820675
Cheers,
Paul Ibison SQL Server MVP, www.replicationanswers.com
(recommended sql server 2000 replication book:
http://www.nwsu.com/0974973602p.html)
|||Bi-directional transactional replication only works between 2 nodes. The
closest you can get is merge, but this would require your publisher to be
always accessible. If it goes down, the other two nodes are out of luck.
Hilary Cotter
Looking for a SQL Server replication book?
http://www.nwsu.com/0974973602.html
Looking for a FAQ on Indexing Services/SQL FTS
http://www.indexserverfaq.com
"Paul Ibison" <Paul.Ibison@.Pygmalion.Com> wrote in message
news:%23AE7C%23REGHA.3984@.TK2MSFTNGP14.phx.gbl...
>
|||Maybe I'm missing something, but according to the MS document, you can use
a central subscriber topology with bi-directional transactional replication
to cater for more than two nodes?
Cheers,
Paul
"Hilary Cotter" <hilary.cotter@.gmail.com> wrote in message
news:%23P67ZKSEGHA.828@.TK2MSFTNGP10.phx.gbl...
|||Paul Ibison wrote:
> Maybe I'm missing something it but according to the MS document, you can use
> a central subscriber topology to use BiDirectional Transactional replication
> to cater for more than two nodes?
> Cheers,
> Paul
That's what I'm trying to do. Manually, as stated in that article. Still
working on it ;)
|||Well, what do you know!
Last time I tried this I could not get it to work. Changes originating on
Server B would go to Servers A and C, but not back to B. Then from C they
would loop back to A and go on to B, causing a collision. Let me try to
repro this again.
Hilary Cotter
Looking for a SQL Server replication book?
http://www.nwsu.com/0974973602.html
Looking for a FAQ on Indexing Services/SQL FTS
http://www.indexserverfaq.com
"Paul Ibison" <Paul.Ibison@.Pygmalion.Com> wrote in message
news:OlJqkTSEGHA.1384@.TK2MSFTNGP11.phx.gbl...
|||Hilary,
I just managed to get this to work, but it's a mess of things to do
manually (including generating - and then modifying!! - the stored
procedures that handle inserts/updates/deletes on the other servers).
If you follow the instructions and start with a simple table, you get
it working. The problem is then using this for a large db...
I'm still evaluating whether it's worth the effort, but this is the only
solution I found (thanks Paul) to simulate what in SQL 2005 is called
"peer-to-peer transactional replication".
If you or anybody else is using this kind of replication, I'd like to
hear your opinions.
bye,
al.
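For reference, the KB 820675 approach relies on the loopback detection option of transactional subscriptions to stop changes from echoing back to their origin. A sketch of the subscription call run on each side of the pair (server, database, and publication names are hypothetical):

```sql
-- Run at each publisher, for the subscription pointing back the other way.
-- @loopback_detection = 'true' keeps a transaction from being re-sent
-- to the server it originally came from.
EXEC sp_addsubscription
    @publication        = N'BiDirPub',   -- hypothetical publication name
    @subscriber         = N'SERVERB',    -- the other node
    @destination_db     = N'ReplDb',     -- hypothetical database name
    @subscription_type  = N'push',
    @sync_type          = N'none',       -- schema/data assumed already in place
    @loopback_detection = N'true';
```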

|||It absolutely cannot be done. You wind up with endlessly looping
transactions.
Mike
Mentor
Solid Quality Learning
http://www.solidqualitylearning.com
"AL" <albeNOSPAM@.ihnetNOSPAM.it> wrote in message
news:uAJszyREGHA.376@.TK2MSFTNGP12.phx.gbl...
|||Mike,
I like your explanation of avoiding closing the loop, but as there is only a
requirement for 3 servers (and the poster wants the nearest SQL 2000 method
to simulate peer-to-peer), why won't
ServerA <--> ServerB <--> ServerC
work? (You mention this in another reply.)
Cheers,
Paul Ibison SQL Server MVP, www.replicationanswers.com
(recommended sql server 2000 replication book:
http://www.nwsu.com/0974973602p.html)
|||Al,
it would help us to know whether it is a requirement for all the nodes to
communicate with each other (rather than having a simple chain). As you can
see from the thread, this is the crux of whether or not bidirectional TR is
applicable.
Cheers,
Paul Ibison SQL Server MVP, www.replicationanswers.com
(recommended sql server 2000 replication book:
http://www.nwsu.com/0974973602p.html)

Peer to Peer replication and cascading deletes

I was hoping someone could explain how cascading deletes work in peer-to-peer replication on SQL 2005 SP2, where the child tables are also being replicated.

I did some of my own testing; here are the results:

Peer to Peer Replication Testing: FK Cascading Deletes

Test 1
1. Create 2 tables with a foreign key dependency; both tables are being replicated.
2. Create the FK relationship with each of the options below.

FOR REPLICATION:
  SQL1: rows are removed as expected.
  SQL2: all rows are deleted, as on SQL1; no errors are generated.
NOT FOR REPLICATION:
  SQL1: rows are removed as expected.
  SQL2: all rows are deleted, as on SQL1; no errors are generated.

Test 2
1. Create 2 tables with a foreign key dependency; only one table is replicated.
2. Create the FK relationship with each of the options below.

FOR REPLICATION:
  SQL1: rows are removed as expected.
  SQL2: all rows are deleted, as on SQL1; no errors are generated.
NOT FOR REPLICATION:
  SQL1: rows are removed as expected.
  SQL2: the cascading delete failed; only the primary record was removed. This does not break replication, as the table that retained the row is not replicated.
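The two variants tested above can be sketched as follows; this is only an illustration, and the table and constraint names are made up:

```sql
-- Variant 1: the cascade fires both for local deletes and for deletes
-- applied by the Distribution Agent.
ALTER TABLE dbo.Child
    ADD CONSTRAINT FK_Child_Parent
    FOREIGN KEY (ParentId) REFERENCES dbo.Parent (ParentId)
    ON DELETE CASCADE;

-- Variant 2: with NOT FOR REPLICATION, the constraint (and its cascade) is
-- ignored when the Distribution Agent applies a replicated delete, which is
-- why an unreplicated child table keeps its rows in Test 2 above.
ALTER TABLE dbo.Child2
    ADD CONSTRAINT FK_Child2_Parent
    FOREIGN KEY (ParentId) REFERENCES dbo.Parent (ParentId)
    ON DELETE CASCADE
    NOT FOR REPLICATION;
```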

peer to peer replication

Hi,

I set up peer-to-peer replication between two servers, but only one side seems to be working. I am getting a Log Reader Agent job error:

Unable to start execution of step 2 (reason: Error authenticating proxy domain\user, system error: Logon failure: unknown user name or bad password.). The step failed.

I am using the same Windows account on both servers; why is one side working and the other not?

thanks

Double-check the account and password that start the job, the SQL Server and SQL Server Agent services, and double-check their permissions on the machine.

Peer to Peer question.

BOL link:
ms-help://MS.SQLCC.v9/MS.SQLSVR.v9.en/rpldata9/html/23e7e8c1-002f-4e69-8c99-d63e4100de64.htm
Howdy all. I was just reading up on this topic and have a question. It says
on this page (Topology with two participating databases) that reads will be
spread out between the two servers, thereby giving better performance. But it
doesn't say how that is accomplished. Does the app have to be designed a
specific way, as it would be if updates were going to go between the two
boxes in the same scenario?
TIA, ChrisR
Hi Chris - I would guess that they are talking about a load-balancing
middle-tier solution, but I can't see the exact link here. Can you post up
the topic header and I'll check out the page on the web.
Cheers,
Paul Ibison SQL Server MVP, www.replicationanswers.com
|||It's scale-out read and write performance. Instead of having 100 users
banging away against one box, you have two boxes, with 50 of the 100 users
banging away against box 1 and the other 50 against box 2. You get improved
performance because of reduced I/O contention, reduced lock contention, and
improved concurrency.
Hilary Cotter
Looking for a SQL Server replication book?
http://www.nwsu.com/0974973602.html
Looking for a FAQ on Indexing Services/SQL FTS
http://www.indexserverfaq.com
"ChrisR" <ChrisR@.discussions.microsoft.com> wrote in message
news:8EB2FFF2-B2A5-4F1A-A0AB-58552D3C212D@.microsoft.com...
|||I understand why you would want it to be spread out, but I don't understand
how it gets spread out. Does the app need to be written a certain way?
"Hilary Cotter" wrote:

|||replication [SQL Server], peer-to-peer replication/ Peer to Peer
Transactional Replication.
"Paul Ibison" wrote:

|||DML which occurs on one server is sent to each node in the peer-to-peer
topology. Each command has an originating-server tag on it, so it does not
loop back.
The recommendations for writing a client app which uses the peer-to-peer
topology are that all updates are directed at one server to prevent
conflicts, that the tables are perfectly partitioned to prevent conflicts,
and that all systems are quiesced when you make schema changes.
Hilary Cotter
Looking for a SQL Server replication book?
http://www.nwsu.com/0974973602.html
Looking for a FAQ on Indexing Services/SQL FTS
http://www.indexserverfaq.com
"ChrisR" <ChrisR@.discussions.microsoft.com> wrote in message
news:37CA311D-26D6-42A9-88B7-A1396C8CB730@.microsoft.com...
|||Thanks guys.
"ChrisR" wrote:

> BOL link:
> ms-help://MS.SQLCC.v9/MS.SQLSVR.v9.en/rpldata9/html/23e7e8c1-002f-4e69-8c99-d63e4100de64.htm
> Howdy all. I was just reading up on this topic and have a question. It says
> on this page (Topology with two participating databases) that reads will be
> spread out between the two servers, thereby giving better performance. But it
> doesn't say how that is accomplished. Does the app have to be designed a
> specific way, like it would be if updates were going to go between the two
> boxes in the same scenario?
> TIA, ChrisR
>

Peer to Peer difficulty when adding a column

I'm trying to add a column to a table in my peer-to-peer replication
topology. According to the help I've found:
Adding Columns
a.. To add a new column to a table and include that column in an existing
publication, execute ALTER TABLE <Table> ADD <Column>. By default, the
column is then replicated to all Subscribers. The column must allow NULL
values or include a default constraint.
b.. To add a new column to a table and not include that column in an
existing publication, disable the replication of schema changes, and then
execute ALTER TABLE <Table> ADD <Column>.
c.. To include an existing column in an existing publication, use
sp_articlecolumn (Transact-SQL), sp_mergearticlecolumn (Transact-SQL), or
the Publication Properties - <Publication> dialog box.
For more information, see How to: Define and Modify a Column Filter
(Replication Transact-SQL Programming) and How to: Define and Modify a
Column Filter (SQL Server Management Studio). This will require
subscriptions to be reinitialized.
d.. Adding an identity column to a published table is not supported,
because it can result in non-convergence when the column is replicated to
the Subscriber. The values in the identity column at the Publisher depend on
the order in which the rows for the affected table are physically stored.
The rows might be stored differently at the Subscriber; therefore the value
for the identity column can be different for the same rows.
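The first rule quoted above (a column added to a published table must allow NULLs or carry a default constraint) exists because every existing row needs a value for the new column. A quick illustration, using Python's built-in sqlite3 as a stand-in for SQL Server, since it enforces the same idea for ALTER TABLE ... ADD COLUMN:

```python
# sqlite3 stands in for SQL Server here; the constraint being shown is
# the same one the BOL excerpt describes for replicated schema changes.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY)")
conn.execute("INSERT INTO t (id) VALUES (1)")

# NULLable column: fine, existing rows get NULL
conn.execute("ALTER TABLE t ADD COLUMN note TEXT")

# NOT NULL with a DEFAULT: fine, existing rows get the default
conn.execute("ALTER TABLE t ADD COLUMN status TEXT NOT NULL DEFAULT 'new'")

# NOT NULL with no default: rejected, row id=1 would have no value
try:
    conn.execute("ALTER TABLE t ADD COLUMN bad TEXT NOT NULL")
    failed = False
except sqlite3.OperationalError:
    failed = True
```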
While I am able to add the column at the publisher, and it is
replicated to the subscriber, I cannot update this column at the subscriber
and have the change replicate back to the publisher. When I update at the
publisher, the change IS pushed to the subscriber.
Outside of the new column, my updates work bi-directionally. Can someone
tell me how I can get updates on newly added columns to be bi-directional?
Thanks,
Adam P. Cassidy
Did you quiesce the system as outlined in this link?
http://msdn2.microsoft.com/en-us/library/ms147385.aspx
Hilary Cotter
Director of Text Mining and Database Strategy
RelevantNOISE.Com - Dedicated to mining blogs for business intelligence.
This posting is my own and doesn't necessarily represent RelevantNoise's
positions, strategies or opinions.
Looking for a SQL Server replication book?
http://www.nwsu.com/0974973602.html
Looking for a FAQ on Indexing Services/SQL FTS
http://www.indexserverfaq.com
"Adam Patrick Cassidy" <apcassidy@.aol.com> wrote in message
news:eoBB5en4GHA.1200@.TK2MSFTNGP02.phx.gbl...
> I'm trying to add a column to my peer to peer replication. According to
> the help I've found:
> Adding Columns
> a.. To add a new column to a table and include that column in an existing
> publication, execute ALTER TABLE <Table> ADD <Column>. By default, the
> column is then replicated to all Subscribers. The column must allow NULL
> values or include a default constraint.
> b.. To add a new column to a table and not include that column in an
> existing publication, disable the replication of schema changes, and then
> execute ALTER TABLE <Table> ADD <Column>.
> c.. To include an existing column in an existing publication, use
> sp_articlecolumn (Transact-SQL), sp_mergearticlecolumn (Transact-SQL), or
> the Publication Properties - <Publication> dialog box.
> For more information, see How to: Define and Modify a Column Filter
> (Replication Transact-SQL Programming) and How to: Define and Modify a
> Column Filter (SQL Server Management Studio). This will require
> subscriptions to be reinitialized.
> d.. Adding an identity column to a published table is not supported,
> because it can result in non-convergence when the column is replicated to
> the Subscriber. The values in the identity column at the Publisher depend
> on the order in which the rows for the affected table are physically
> stored. The rows might be stored differently at the Subscriber; therefore
> the value for the identity column can be different for the same rows.
> However, while I am able to add the column to the publisher and it is
> replicated to the subscriber, I cannot update this column at the
> subscriber and have it replicate to the publisher. However when I update
> at the publisher it IS pushed to the subscriber.
> Outside of the new column, my updates bi-directionally work. Can someone
> tell me how I can get updates on newly added columns to be bi-directional?
> Thanks,
> Adam P. Cassidy
>
|||Yes. That is the same article I used as a reference originally.
Adam P. Cassidy
"Hilary Cotter" <hilary.cotter@.gmail.com> wrote in message
news:%234SYrOq4GHA.1188@.TK2MSFTNGP05.phx.gbl...
> Did you quiesce the system as outlined in this link?
> http://msdn2.microsoft.com/en-us/library/ms147385.aspx
> --
> Hilary Cotter
> Director of Text Mining and Database Strategy
> RelevantNOISE.Com - Dedicated to mining blogs for business intelligence.
> This posting is my own and doesn't necessarily represent RelevantNoise's
> positions, strategies or opinions.
> Looking for a SQL Server replication book?
> http://www.nwsu.com/0974973602.html
> Looking for a FAQ on Indexing Services/SQL FTS
> http://www.indexserverfaq.com
>
> "Adam Patrick Cassidy" <apcassidy@.aol.com> wrote in message
> news:eoBB5en4GHA.1200@.TK2MSFTNGP02.phx.gbl...
>

Peer to Peer 64bit and 32bit question

If I have an existing 64bit P2P transactional replication setup, can I join
a 32bit machine without any expected problems or does it also have to be a
64bit machine?
Any direction to resources would be great as well.
Adam P. Cassidy
Adam,
For SQL Server 2000, the issues were summarised as follows:
As the Microsoft Jet engine is not supported, Microsoft Jet push
subscriptions for merge replication are not supported.
Unless the subscriber provides a 64-bit ODBC or OLE DB driver, transactional
or snapshot push subscriptions for ODBC or OLE DB subscribers are not
supported.
Because of the unavailability of 64-bit Data Transformation Services (DTS),
transformable push subscriptions are not supported.
For peer-to-peer replication between 64-bit and 32-bit machines I am
therefore aware of no issues.
Cheers,
Paul Ibison SQL Server MVP, www.replicationanswers.com .
|||This is a tested configuration.
"Adam Patrick Cassidy" <apcassidy@.aol.com> wrote in message
news:O%23gjeUCCHHA.4864@.TK2MSFTNGP04.phx.gbl...
> If I have an existing 64bit P2P transactional replication setup, can I
> join
> a 32bit machine without any expected problems or does it also have to be a
> 64bit machine?
> Any direction to resources would be great as well.
> Adam P. Cassidy
>

Peculiar problem in SSRS

Hi all ,

I am encountering a peculiar problem while creating a linked report in SSRS.

The scenario:

My first report contains 4 columns. The first 3 columns contain data from the Vendor table in the AdventureWorks DB: VendorID, Name, and CreditRating. The fourth column contains a link to the second report. It is just a link which jumps to the other report and passes the above three values as parameters.

The second report takes 4 parameters: VendorID, Name, CreditRating, and one more, PostedBy, which has to be entered manually when the second report is displayed. The three parameters from the first report are passed on automatically; only after the extra parameter is entered manually does the second report display.

It works fine when I check it in Visual Studio preview mode. I click the link for a particular row in the first report and it asks me to enter the PostedBy parameter. After entering it, the second report displays. But the problem arises when I deploy it to the report server. When I click the link on the first report it throws an error like "The parameter 'PostedBy' is missing a value", whereas it should prompt the user to enter the PostedBy parameter.

I am stuck and not able to understand why this happens on the report server and not in Visual Studio preview mode.

Does anybody have any idea why this is happening?

Any Suggestion will be appreciated.

Regards...

Girija Shankar Beuria

You must specify a default value for the fourth parameter. Visual Studio is an interactive environment, so it can prompt you for the value on request. In a production environment, however, reports are consumed by other applications, and the report cannot request input from the calling application. This applies to Report Manager as well, which is an external web application that consumes Reporting Services.

Shyam

|||

Just go to the code and see if there is any parameter hard-coded in the XML.

Remove that parameter and save it .

This will probably solve your problem.

Vamsi Krishna Korasiga

Pedigree database

I want to write an online database that displays 4-generation pedigrees. Can anyone point me to a tutorial/web site on how to do this? I've written several online databases in asp, but can't figure out how to do this particular algorithm. Any help appreciated.

Basically I have a list of about 400,000 horses, each with a unique ID code. Each individual horse's record identifies its mother and father by their ID codes, i.e. three columns: the horse's ID, and its mother's and father's IDs. So, when you display a horse's four-generation chart, it should all link up: horse, mother, father, grandparents, great-grandparents, etc. It should be a common database task, but I've searched everywhere on the web and in various books and can't find how to write and link up the SQL statements. Any help/pointers would be appreciated. Cheers

Your table design should be similar to the following:

Horse Id, Horse Detail, Father Id, Mother Id

Using a SELF JOIN (relating the table to itself by aliasing), you can derive any number of levels in the hierarchy.

e.g. Data

Horse Id | Horse Detail | Father Id | Mother Id
1 | ABC | |
2 | DEF | |
3 | GHI |1 |2
4 | JKL |1 |2
5 | MNO | |
6 | PQR |1 |5

An additional field Sex (F/M) may be useful for your application.

Hope this helps.
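Here is that self-join run end-to-end, using Python's built-in sqlite3 as a stand-in for SQL Server, with the sample data above (column names slightly adapted; all names here are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE horses (
    horse_id  INTEGER PRIMARY KEY,
    name      TEXT,
    father_id INTEGER,   -- NULL when the sire is unknown
    mother_id INTEGER)""")
conn.executemany(
    "INSERT INTO horses VALUES (?, ?, ?, ?)",
    [(1, "ABC", None, None), (2, "DEF", None, None),
     (3, "GHI", 1, 2),       (4, "JKL", 1, 2),
     (5, "MNO", None, None), (6, "PQR", 1, 5)])

# One generation up: alias the table once per relative.
# LEFT JOINs keep horses whose parents are unrecorded.
row = conn.execute("""
    SELECT h.name, f.name AS father, m.name AS mother
    FROM horses AS h
    LEFT JOIN horses AS f ON f.horse_id = h.father_id
    LEFT JOIN horses AS m ON m.horse_id = h.mother_id
    WHERE h.horse_id = 3""").fetchone()
# Chaining more aliased joins on f.father_id, f.mother_id, etc.
# climbs to grandparents and great-grandparents the same way.
```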

Peculiar T-SQL requirement.

Hi all,

I have a scenario in which I need to do amount balancing (meaning, offset the few records with positive amounts whose sum equals the negative amount in the same recordset).

Consider the below example.

Col1 Col2 Amount OFFSet

A 01 100 0
A 01 20 0
A 01 30 0
A 01 25 0
A 01 -145 0
A 01 15 0


The above 6 records are fetched for a particular condition. My requirement is to update the OFFSet column of the positive-valued records whose sum equals the absolute value of the negative amount (145).

That is, I need the below result.


Col1 Col2 Amount OFFSet

A 01 100 1
A 01 20 1
A 01 30 0
A 01 25 1
A 01 -145 0
A 01 15 0

Only the records whose amounts sum to the negative amount's value should be offset, no matter what sequence they are in.

I guess I'm being a bit confusing; let me know if you need more explanation.

Any help would be appreciated.

Thanks,

DBLearner


What you describe is a HUGE logic problem, not a SQL problem. That is very complicated to accomplish.

What do you do when multiple combinations of records match your criteria? For example:

Col1 Col2 Amount OFFSet

A 01 100 0
A 01 20 0
A 01 30 0
A 01 25 0
A 01 -145 0
A 01 15 0
A 01 25 0
A 01 100 0
A 01 25 0
A 01 25 0
A 01 25 0
A 01 15 0
A 01 10 0


What now?

-145 = 100+20+25
-145 = 25+25+25+25+20+25
-145 = 100+20+15+10
etc
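The ambiguity is easy to demonstrate. Treated as subset-sum, a brute-force Python sketch over the sample amounts finds several distinct subsets that hit 145, which is exactly why the requirement must also say which subset wins (e.g. first-in-first-out order) before any SQL can be written:

```python
# Brute-force subset-sum over the reply's sample amounts. Fine for a
# dozen values (2**12 subsets); real data would need a smarter approach.
from itertools import combinations

amounts = [100, 20, 30, 25, 15, 25, 100, 25, 25, 25, 15, 10]
target = 145   # absolute value of the negative amount

matches = set()
for r in range(1, len(amounts) + 1):
    for combo in combinations(amounts, r):
        if sum(combo) == target:
            matches.add(tuple(sorted(combo)))  # dedupe equal multisets
```

More than one subset qualifies, so "update the records whose sum equals 145" is underspecified until a tie-breaking rule is chosen.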


Peculiar Replication (Could be OT)

We have 2 types of production database. One is an OLTP database; the other contains all the master tables. We also have 2 copies of each, and this repeats itself for each country where a plant is located. Can someone explain how we might set up replication so that the master tables, which are maintained in one location, get replicated over to the other places, and updates from those locations are also propagated to all locations? We had MS come into the picture and they created triggers such that the data is copied over, but they seem to delete the record and insert it back again for an update, and sometimes in between the delete and the insert the leased line goes down and the result is a total failure.
Regards,
Trevor Benedict R
Trevor,
this is typically implemented using merge replication. You'll need to consider partitioning of data for each subscriber, or the possibility of conflicts and how they should be resolved. There's a good set of explanations of general concepts in BOL.
The network failures you experience are usually solved in merge by simply restarting the merge agent.
HTH,
Paul Ibison

Peculiar problem with seemingly identical data

Hello,

I have this peculiar problem concerning MS SQL Server.

My company works with a mailing application (ASP) which uses SQL
Server as its repository. What I want to do is send data directly
from my own application to this SQL Server in order to feed the
mailing application.

To test if this was possible I linked the tables from SQL Server in MS
Access and entered the data. This worked fine and the data was picked
up correctly by the mailing application.

The problem occurs when I send the data from my application (Java
application with JDBC connection). The data is in this case no longer
picked up by the application. The strange thing is that the data which
is entered through Access and the data from the application look
identical in the database view. The problem also occurs when the data
is sent with the tool WinSQL, and when I view the data there it still
looks identical.

Even stranger is that when I select the non-working record in
Access and copy it into a new record (only changing the key), it
suddenly works!

Does anyone have an idea how this can be?

Thanks in advance,

Sander Janssen.
Sander Janssen (sjanssen@.plex.nl) writes:
> My company works with an mailing application (ASP) which uses SQL
> Server as it's repository. What I want to do is send data directly
> from my own application to this SQL Server in order to feed the
> mailing application.
> To test if this was possible I linked the tables from SQL Server in MS
> Access and entered the data. This worked fine and the data was picked
> up correctly by the mailing application.
> The problem occurs when I send the data from my application (Java
> application with JDBC connection). The data is in this case no longer
> picked up by the application. The strange thing is that the data which
> is entered through Access and the data from the Application look
> identical in de database view. The problem also occurs when the data
> is send with the tool winSQL and when I view the data in here it still
> looks identical.
> Even more strange is when I select the record which is not working in
> Access and copy it into a new record (only changing the key) it
> suddenly works!

I don't have a clue. The thing is, I have no idea by which criteria
your application is picking up data. You need to find this out one
way or another. One way to do this is to use the Profiler to catch
which SQL it is using.

--
Erland Sommarskog, SQL Server MVP, sommar@.algonet.se

Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techin.../2000/books.asp
|||Which JDBC driver and what version of SQL Server?

"Sander Janssen" <sjanssen@.plex.nl> wrote in message
news:e3b8d8ea.0307040443.5bc4a8d@.posting.google.co m...
> Hello,
> I have this peculiar problem concerning MS SQL Server.
> My company works with an mailing application (ASP) which uses SQL
> Server as it's repository. What I want to do is send data directly
> from my own application to this SQL Server in order to feed the
> mailing application.
> To test if this was possible I linked the tables from SQL Server in MS
> Access and entered the data. This worked fine and the data was picked
> up correctly by the mailing application.
> The problem occurs when I send the data from my application (Java
> application with JDBC connection). The data is in this case no longer
> picked up by the application. The strange thing is that the data which
> is entered through Access and the data from the Application look
> identical in de database view. The problem also occurs when the data
> is send with the tool winSQL and when I view the data in here it still
> looks identical.
> Even more strange is when I select the record which is not working in
> Access and copy it into a new record (only changing the key) it
> suddenly works!
> Has anyone have an idea how this can be?
> Thanks in advance,
> Sander Janssen.
|||Hello,

I have found the source of the problem. It isn't as peculiar as I
thought at first though. The problem was that I wrote a date to an
identifier field and the characters ":" were the source of the problem
in the mailing application. When I replaced them with underscores it
all works fine!

Thanks for your help anyway!

Sander Janssen.
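For reference, the workaround described above amounts to sanitizing the value before using it as an identifier. A minimal Python sketch (the function name is invented, and only ":" is known from the thread to be a problem character):

```python
def safe_identifier(value: str) -> str:
    """Replace colons (e.g. from a timestamp used as a key) with
    underscores, since ":" broke the mailing application's lookup."""
    return value.replace(":", "_")
```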

Peculiar problem with BCP data load

I have a table in a database with a total row length of about
225 bytes and about 2.7 million rows. When the data is
loaded using bcp, some days it takes up more than 8 to 16
GB and other days it takes about 800 MB. It is not consistent
in how it behaves on any given day.
It's a plain table (heap) with no index and no primary key. I see from
statistics that on a whole page only one to two rows are
stored and the rest of the space is free. Has anyone seen
this issue before?
DBCC SHOWCONTIG scanning 'PSN' table...
Table: 'PSN' (548197003); index ID: 0, database ID: 7
TABLE level scan performed.
- Pages Scanned........................: 1003989
- Extents Scanned.......................: 126702
- Extent Switches.......................: 126701
- Avg. Pages per Extent..................: 7.9
- Scan Density [Best Count:Actual Count]......: 99.05%
[125499:126702]
- Extent Scan Fragmentation ...............: 2.61%
- Avg. Bytes Free per Page................: 7441.4
- Avg. Page Density (full)................: 8.06%
DBCC execution completed. If DBCC printed error messages,
contact your system administrator.
There are many solutions to make it work. But just curious
as to what is causing this. Any pointers are greatly
appreciated.
Regards,
Don
Try creating a clustered index on the table, preferably one that is in order
of the data to some degree.
Andrew J. Kelly SQL MVP
"Don" <anonymous@.discussions.microsoft.com> wrote in message
news:0c2c01c4abba$f0313540$a401280a@.phx.gbl...
> I have table in database with total column length about
> 225 bytes and have about 2.7 mil rows. When the data is
> loaded using bcp some days it takes up more then 8 to 16
> GB and other days it takes about 800 MB. Not consistent
> as to how it behaves on any give days.
> It's a row table with no index no primary key. I see from
> statistics that on whole page only one to two rows are
> stored and rest of the space is free. Any one has seen
> this issue before.
> DBCC SHOWCONTIG scanning 'PSN' table...
> Table: 'PSN' (548197003); index ID: 0, database ID: 7
> TABLE level scan performed.
> - Pages Scanned........................: 1003989
> - Extents Scanned.......................: 126702
> - Extent Switches.......................: 126701
> - Avg. Pages per Extent..................: 7.9
> - Scan Density [Best Count:Actual Count]......: 99.05%
> [125499:126702]
> - Extent Scan Fragmentation ...............: 2.61%
> - Avg. Bytes Free per Page................: 7441.4
> - Avg. Page Density (full)................: 8.06%
> DBCC execution completed. If DBCC printed error messages,
> contact your system administrator.
> There are many solutions to make it work. But just curious
> as to what is causing this. Any pointers are greatly
> appreciated.
> Regards,
> Don
>
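The SHOWCONTIG numbers above bear out the "one to two rows per page" observation. A back-of-envelope check, assuming the usual ~8060 usable bytes per 8 KB SQL Server data page and ignoring per-row overhead (both figures are approximations, not exact storage-engine accounting):

```python
# Rough arithmetic on the figures from the post.
ROW_BYTES = 225
ROWS = 2_700_000
USABLE_PAGE_BYTES = 8060          # approx. usable space on an 8 KB page

rows_per_full_page = USABLE_PAGE_BYTES // ROW_BYTES    # ~35 rows fit
expected_pages = -(-ROWS // rows_per_full_page)        # ceiling division
observed_pages = 1_003_989                             # from SHOWCONTIG
rows_per_observed_page = ROWS / observed_pages         # ~2.7 rows/page
```

Tightly packed, the data needs on the order of 77,000 pages, versus the roughly one million pages observed, which is consistent with a heap accumulating nearly empty pages; a clustered index, as suggested, rebuilds the pages densely.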


Please help - Field value is truncated

I use a stored procedure to populate a table and it truncates the value to
one character for the field eMail.
Do you have an idea what the problem is?
I have to say also that I have tried this and it works:
INSERT VH_Data_NewsEmail (Id, eMail) VALUES (@.IdMaxPlus, `string`)
So it looks like there is a problem at the @.eMailPlus variable level.
TABLE IS:
Id int
eMail varchar(MAX)
STORED PROCEDURE IS:
set ANSI_NULLS ON
set QUOTED_IDENTIFIER ON
GO
CREATE PROCEDURE [dbo].[sp_myInsertStoredProcedure] @.eMail varchar AS
DECLARE @.IdMaxPlus AS INT
DECLARE @.eMailPlus AS VARCHAR
BEGIN TRANSACTION
SET @.eMailPlus = 'Ring'
SELECT @.IdMaxPlus = coalesce(MAX(Id), 0) + 1 FROM VH_Data_NewsEmail
(UPDLOCK)
INSERT VH_Data_NewsEmail (Id, eMail) VALUES (@.IdMaxPlus, @.eMailPlus)
COMMIT TRANSACTION
|||OK, fixed it:
DECLARE @.eMailPlus AS VARCHAR(MAX)
"Progman" <adfawefqw@.hotmail.com> wrote in message
news:1fmKf.2089$Jb7.681075@.weber.videotron.net...
>I use a stored procedure to populate a table and it truncates the value to
>one character for the field eMail.
> you have an idea what is the problem?
> i have to say also that I have tried this and it works:
> INSERT VH_Data_NewsEmail (Id, eMail) VALUES (@.IdMaxPlus, `string`)
> So it looks like there is a problem at the @.eMailPlus variable level.
> TABLE IS:
> Id int
> eMail varchar(MAX)
> STORED PROCEDURE IS:
> set ANSI_NULLS ON
> set QUOTED_IDENTIFIER ON
> GO
> CREATE PROCEDURE [dbo].[sp_myInsertStoredProcedure] @.eMail varchar AS
> DECLARE @.IdMaxPlus AS INT
> DECLARE @.eMailPlus AS VARCHAR
> BEGIN TRANSACTION
> SET @.eMailPlus = 'Ring' VARCHAR
> SELECT @.IdMaxPlus = coalesce(MAX(Id), 0) + 1 FROM VH_Data_NewsEmail
> (UPDLOCK)
> INSERT VH_Data_NewsEmail (Id, eMail) VALUES (@.IdMaxPlus, @.eMailPlus)
> COMMIT TRANSACTION
>

Peak During Time Period

Hi
I am hoping someone might be able to help me out with this.
I am writing a helpdesk system which records agents logging in and out
of the system.
I need to write a stored procedure which will show the peak number of
agents logged in concurrently during a specified time period. Within
the time period, the person viewing the report should be able to
specify intervals at which to see the data.
Eg. There is already a table in the system which holds logged
in/logged out data like
22/11/2004 14:02 - 22/11/2004 17:30
22/11/2004 09:00 - 22/11/2004 17:12
22/11/2004 10:25 - 22/11/2004 16:30
22/11/2004 11:02 - 22/11/2004 12:30
22/11/2004 16:00 - 22/11/2004 17:30
The report user can then say for example they want to view data
between 10th November and 12th November broken down into 15 minutes
intervals which would produce a table like this:
10/11/2004 00:00 - 10/11/2004 00:15
10/11/2004 00:15 - 10/11/2004 00:30
10/11/2004 00:30 - 10/11/2004 00:45
10/11/2004 00:45 - 10/11/2004 01:00 etc etc
Against each of these time slots, I need to work out the peak number
of concurrent agents logged in using the first table.
Can anyone make any suggestions? The intervals the report user can
choose are 15 mins, 30 mins, 45 mins, 1 hour, or 1 day.
Thanks in advance
[posted and mailed, please reply in news]
Dave (dave@.court-technologies.com) writes:
> I need to write a stored procedure which will show the peak number of
> agents logged in concurrently during a specified time period. Within
> the time period, the person viewing the report should be able to
> specify intervals at which to see the data.
> Eg. There is already a table in the system which holds logged
> in/logged out data like
> 22/11/2004 14:02 - 22/11/2004 17:30
> 22/11/2004 09:00 - 22/11/2004 17:12
> 22/11/2004 10:25 - 22/11/2004 16:30
> 22/11/2004 11:02 - 22/11/2004 12:30
> 22/11/2004 16:00 - 22/11/2004 17:30
> The report user can then say for example they want to view data
> between 10th November and 12th November broken down into 15 minutes
> intervals which would produce a table like this:
> 10/11/2004 00:00 - 10/11/2004 00:15
> 10/11/2004 00:15 - 10/11/2004 00:30
> 10/11/2004 00:30 - 10/11/2004 00:45
> 10/11/2004 00:45 - 10/11/2004 01:00 etc etc
> Against each of these time slots, I need to work out the peak number
> of concurrent agents logged in using the first table.
The normal recommendation for this sort of post is to include:
o CREATE TABLE statements for the involved tables.
o INSERT statements with sample data.
o The desired output given the sample.
This makes it easy to post a tested solution, since the dirty work is
already set up, and it's only a matter of cut and paste.
This time I did it for you, because the problem seemed interesting enough.
First I set up a table of numbers. This is a one-column table with numbers
1 up to some limit (80000 in this case; that's 55 days of minutes). Then I
packed the actual query into a stored procedure to easily permit parameters.
@.len is the length of the reporting interval in minutes.
The query has a number of nested derived tables. The innermost gives
the number of agents logged in at any given minute. The middle table
normalizes the minute to the start of the reporting interval, and
the outermost gets the maximum count for each interval.
Further testing is recommended!
CREATE TABLE sessions (start datetime NOT NULL,
stop datetime NULL)
go
SET DATEFORMAT dmy
go
SELECT TOP 80000 n = identity(int, 1, 1)
INTO numbers
FROM Northwind..Orders a
CROSS JOIN Northwind..Orders b
go
INSERT sessions (start, stop)
SELECT '22/11/2004 14:02', '22/11/2004 17:30' UNION
SELECT '22/11/2004 09:00', '22/11/2004 17:12' UNION
SELECT '22/11/2004 10:25', '22/11/2004 16:30' UNION
SELECT '22/11/2004 11:02', '22/11/2004 12:30' UNION
SELECT '22/11/2004 16:00', '22/11/2004 17:30' UNION
SELECT '22/11/2004 16:00', '22/11/2004 16:05' UNION
SELECT '22/11/2004 16:06', '22/11/2004 16:10'
go
CREATE PROCEDURE get_peaks @.start datetime,
@.stop datetime,
@.len smallint AS
SELECT intstart, intstop = dateadd(mi, @.len, intstart), MAX(cnt)
FROM (SELECT intstart = dateadd(mi, @.len *
(datediff(mi, @.start, a.minute) / @.len), @.start),
a.cnt
FROM (SELECT mi.minute, cnt = COUNT(s.start)
FROM (SELECT minute = dateadd(mi, n, @.start)
FROM numbers
WHERE n <= datediff(mi, @.start, @.stop)) AS mi
LEFT JOIN sessions s
ON mi.minute BETWEEN s.start AND s.stop
GROUP BY mi.minute) AS a
) AS b
GROUP BY intstart
ORDER BY intstart
go
EXEC get_peaks '20041122 08:00', '20041122 18:00', 15
go
DROP TABLE numbers
DROP TABLE sessions
DROP PROCEDURE get_peaks
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se
Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techinf...2000/books.asp
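As a cross-check of the query's logic (not a replacement for it), the same minute-expansion-and-bucketing idea in plain Python, using the sample sessions from the thread. Like the BETWEEN in the SQL, the count treats the start and stop minutes as logged in:

```python
from datetime import datetime, timedelta

sessions = [  # (login, logout) pairs from the sample data
    (datetime(2004, 11, 22, 14, 2),  datetime(2004, 11, 22, 17, 30)),
    (datetime(2004, 11, 22, 9, 0),   datetime(2004, 11, 22, 17, 12)),
    (datetime(2004, 11, 22, 10, 25), datetime(2004, 11, 22, 16, 30)),
    (datetime(2004, 11, 22, 11, 2),  datetime(2004, 11, 22, 12, 30)),
    (datetime(2004, 11, 22, 16, 0),  datetime(2004, 11, 22, 17, 30)),
]

def peaks(start, stop, interval_minutes):
    """Peak concurrent sessions per interval between start and stop."""
    result = {}
    minutes = int((stop - start).total_seconds() // 60)
    for n in range(minutes + 1):
        minute = start + timedelta(minutes=n)
        # how many agents are logged in during this minute
        cnt = sum(1 for s, e in sessions if s <= minute <= e)
        # normalize the minute to the start of its reporting interval
        bucket = start + timedelta(
            minutes=interval_minutes * (n // interval_minutes))
        result[bucket] = max(result.get(bucket, 0), cnt)
    return result

p = peaks(datetime(2004, 11, 22, 8, 0), datetime(2004, 11, 22, 18, 0), 15)
```

Expanding every minute is O(minutes x sessions), which is exactly what the numbers-table join does on the SQL side.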
|||>> I am writing a helpdesk system which records agents logging in and
out of the system. <<
Please post DDL, so that people do not have to guess what the keys,
constraints, Declarative Referential Integrity, datatypes, etc. in
your schema are. Sample data is also a good idea, along with clear
specifications. Does your boss, who is paying you, hide this
information and expect you to get your job done?
You might also want to learn that the only format for temporal data in
Standard SQL is ISO-8601 (yyyy-mm-dd hh:mm:ss.sss...) and start using
it; you can never tell, other systems just might follow ISO standards.

>> I need to write a stored procedure which will show the peak number
of agents logged in concurrently during a specified time period. <<
Why not VIEWs? SQL is a non-procedural language, after all. If you had
followed minimal netiquette, would this table look like this?
CREATE TABLE HelpDeskLogs
(agent_id CHAR(5) NOT NULL
REFERENCES Agents(agent_id)
ON UPDATE CASCADE
ON DELETE CASCADE,
start_time DATETIME DEFAULT CURRENT_TIMESTAMP NOT NULL,
finish_time DATETIME, -- null means still active
CHECK (start_time < finish_time),
PRIMARY KEY (agent_id, start_time));
>> The report user can then say for example they want to view data
between 10th November and 12th November broken down into 15 minutes
intervals which would produce a table like this: <<
Let's fill up a table of ranges:
CREATE TABLE ReportPeriods
(period_scale CHAR(2) DEFAULT '15' NOT NULL,
CHECK (period_scale IN ('15', '30', '45', 'HR', 'DY')),
start_time DATETIME NOT NULL,
finish_time DATETIME NOT NULL,
CHECK (start_time < finish_time),
PRIMARY KEY (period_scale, start_time));
In standard SQL, we have a predicate for durations like this:
SELECT COUNT(agent_id) AS active_agents
FROM ReportPeriods AS P, HelpDeskLogs AS L
WHERE (P.start_time, P.finish_time)
OVERLAPS (L.start_time, L.finish_time);
That predicate gets translated into this:
(P.start_time > L.start_time
AND NOT (P.start_time >= L.finish_time
AND P.finish_time >= L.finish_time))
OR (L.start_time > P.start_time
AND NOT (L.start_time >= P.finish_time
AND L.finish_time >= P.finish_time))
OR (P.start_time = L.start_time
AND (P.finish_time <> L.finish_time
OR P.finish_time = L.finish_time))
Yes, it is a bit weird because it has to handle NULLs in the general
case.
You might also want to look up Rick Snodgrass at the University of
Arizona. he has a copy of his book on Temporal quereis in SQL in PDF
on his university website.
|||"Dave" <dave@.court-technologies.com> wrote in message
news:7bbc1b13.0411250202.57f40780@.posting.google.c om...
> Hi
> I am hoping someone might be able to help me out with this.
> I am writing a helpdesk system which records agents logging in and out
> of the system.
> I need to write a stored procedure which will show the peak number of
> agents logged in concurrently during a specified time period. Within
> the time period, the person viewing the report should be able to
> specify intervals at which to see the data.
> Eg. There is already a table in the system which holds logged
> in/logged out data like
> 22/11/2004 14:02 - 22/11/2004 17:30
> 22/11/2004 09:00 - 22/11/2004 17:12
> 22/11/2004 10:25 - 22/11/2004 16:30
> 22/11/2004 11:02 - 22/11/2004 12:30
> 22/11/2004 16:00 - 22/11/2004 17:30
> The report user can then say for example they want to view data
> between 10th November and 12th November broken down into 15 minutes
> intervals which would produce a table like this:
> 10/11/2004 00:00 - 10/11/2004 00:15
> 10/11/2004 00:15 - 10/11/2004 00:30
> 10/11/2004 00:30 - 10/11/2004 00:45
> 10/11/2004 00:45 - 10/11/2004 01:00 etc etc
> Against each of these time slots, I need to work out the peak number
> of concurrent agents logged in using the first table.
> Can anyone make any suggestions? The time period the report user can
> choose are either 15 mins, 30 mins, 45 mins, 1 hour and 1 day.
> Thanks in advance
CREATE TABLE LoginPeriods
(
agent_id VARCHAR(20) NOT NULL,
time_in DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP
CHECK (time_in <= CURRENT_TIMESTAMP),
time_out DATETIME NOT NULL DEFAULT '99991231'
CHECK (time_out <= CURRENT_TIMESTAMP OR time_out = '99991231'),
PRIMARY KEY (time_in, agent_id),
CHECK (time_in < time_out)
)
-- Your sample data
INSERT INTO LoginPeriods (agent_id, time_in, time_out)
VALUES ('A1', '20041122 14:02', '20041122 17:30')
INSERT INTO LoginPeriods (agent_id, time_in, time_out)
VALUES ('A2', '20041122 09:00', '20041122 17:12')
INSERT INTO LoginPeriods (agent_id, time_in, time_out)
VALUES ('A3', '20041122 10:25', '20041122 16:30')
INSERT INTO LoginPeriods (agent_id, time_in, time_out)
VALUES ('A4', '20041122 11:02', '20041122 12:30')
INSERT INTO LoginPeriods (agent_id, time_in, time_out)
VALUES ('A5', '20041122 16:00', '20041122 17:30')
-- Digits 0-9
CREATE VIEW Digits (d)
AS
SELECT 0 UNION ALL SELECT 1 UNION ALL SELECT 2 UNION ALL
SELECT 3 UNION ALL SELECT 4 UNION ALL SELECT 5 UNION ALL
SELECT 6 UNION ALL SELECT 7 UNION ALL SELECT 8 UNION ALL
SELECT 9
-- Nonnegative integers to some suitable upper bound
-- Used in representing the sequence of time periods from
-- begin to end datetimes
CREATE VIEW NonnegativeIntegers (n)
AS
SELECT Ones.d + 10 * Tens.d
FROM Digits AS Ones
CROSS JOIN
Digits AS Tens
-- For each time period between begin and end datetimes,
-- return login periods that overlap
CREATE FUNCTION LoginPeriodsBetween
(@.begin_time DATETIME, @.end_time DATETIME, @.period_mins INT)
RETURNS TABLE
AS
RETURN(
SELECT DATEADD(MINUTE, I.n * @.period_mins, @.begin_time) AS begin_time,
DATEADD(MINUTE, (I.n+1) * @.period_mins, @.begin_time) AS end_time,
agent_id,
CASE WHEN time_in <=
DATEADD(MINUTE, I.n * @.period_mins, @.begin_time)
THEN DATEADD(MINUTE, I.n * @.period_mins, @.begin_time)
ELSE time_in
END AS time_in,
CASE WHEN time_out <=
DATEADD(MINUTE, (I.n+1) * @.period_mins, @.begin_time)
THEN time_out
ELSE DATEADD(MINUTE, (I.n+1) * @.period_mins, @.begin_time)
END AS time_out
FROM NonnegativeIntegers AS I
LEFT OUTER JOIN
LoginPeriods AS LP
ON time_out > DATEADD(MINUTE, I.n * @.period_mins, @.begin_time) AND
time_in < DATEADD(MINUTE, (I.n+1) * @.period_mins, @.begin_time)
WHERE I.n < DATEDIFF(MINUTE, @.begin_time, @.end_time) / @.period_mins
)
-- Maximum number of concurrent agent logins per time period
CREATE FUNCTION MaxConcurrentAgents
(@.begin_time DATETIME, @.end_time DATETIME, @.period_mins INT)
RETURNS TABLE
AS
RETURN(
SELECT begin_time, end_time, MAX(concurrent_agents) AS concurrent_agents_tally
FROM (SELECT LP1.begin_time, LP1.end_time,
LP1.agent_id,
LP1.time_in, LP1.time_out,
COUNT(LP2.agent_id) AS concurrent_agents
FROM LoginPeriodsBetween(@.begin_time, @.end_time, @.period_mins) AS LP1
LEFT OUTER JOIN
LoginPeriodsBetween(@.begin_time, @.end_time, @.period_mins) AS LP2
ON LP1.begin_time = LP2.begin_time AND
LP1.end_time = LP2.end_time AND
LP1.time_in >= LP2.time_in AND
LP1.time_in < LP2.time_out
GROUP BY LP1.begin_time, LP1.end_time, LP1.agent_id,
LP1.time_in, LP1.time_out) AS CA
GROUP BY begin_time, end_time
)
-- Maximum number of concurrent agent logins for each 30 minute
-- period between the specified begin and end datetimes
-- Note that no logins for a time period will be indicated by a 0 tally
SELECT begin_time, end_time, concurrent_agents_tally
FROM MaxConcurrentAgents('20041122 09:00', '20041122 18:00', 30)
ORDER BY begin_time
begin_time end_time concurrent_agents_tally
2004-11-22 09:00:00.000 2004-11-22 09:30:00.000 1
2004-11-22 09:30:00.000 2004-11-22 10:00:00.000 1
2004-11-22 10:00:00.000 2004-11-22 10:30:00.000 2
2004-11-22 10:30:00.000 2004-11-22 11:00:00.000 2
2004-11-22 11:00:00.000 2004-11-22 11:30:00.000 3
2004-11-22 11:30:00.000 2004-11-22 12:00:00.000 3
2004-11-22 12:00:00.000 2004-11-22 12:30:00.000 3
2004-11-22 12:30:00.000 2004-11-22 13:00:00.000 2
2004-11-22 13:00:00.000 2004-11-22 13:30:00.000 2
2004-11-22 13:30:00.000 2004-11-22 14:00:00.000 2
2004-11-22 14:00:00.000 2004-11-22 14:30:00.000 3
2004-11-22 14:30:00.000 2004-11-22 15:00:00.000 3
2004-11-22 15:00:00.000 2004-11-22 15:30:00.000 3
2004-11-22 15:30:00.000 2004-11-22 16:00:00.000 3
2004-11-22 16:00:00.000 2004-11-22 16:30:00.000 4
2004-11-22 16:30:00.000 2004-11-22 17:00:00.000 3
2004-11-22 17:00:00.000 2004-11-22 17:30:00.000 3
2004-11-22 17:30:00.000 2004-11-22 18:00:00.000 0
JAG

Peak During Time Period

Hi
I am hoping someone might be able to help me out with this.
I am writing a helpdesk system which records agents logging in and out
of the system.
I need to write a stored procedure which will show the peak number of
agents logged in concurrently during a specified time period. Within
the time period, the person viewing the report should be able to
specify intervals at which to see the data.
Eg. There is already a table in the system which holds logged
in/logged out data like
22/11/2004 14:02 - 22/11/2004 17:30
22/11/2004 09:00 - 22/11/2004 17:12
22/11/2004 10:25 - 22/11/2004 16:30
22/11/2004 11:02 - 22/11/2004 12:30
22/11/2004 16:00 - 22/11/2004 17:30
The report user can then say, for example, that they want to view data
between 10th November and 12th November broken down into 15-minute
intervals, which would produce a table like this:
10/11/2004 00:00 - 10/11/2004 00:15
10/11/2004 00:15 - 10/11/2004 00:30
10/11/2004 00:30 - 10/11/2004 00:45
10/11/2004 00:45 - 10/11/2004 01:00 etc etc
Against each of these time slots, I need to work out the peak number
of concurrent agents logged in using the first table.
Can anyone make any suggestions? The intervals the report user can
choose are 15 mins, 30 mins, 45 mins, 1 hour or 1 day.
Thanks in advance
[posted and mailed, please reply in news]
Dave (dave@.court-technologies.com) writes:
> I need to write a stored procedure which will show the peak number of
> agents logged in concurrently during a specified time period. Within
> the time period, the person viewing the report should be able to
> specify intervals at which to see the data.
> Eg. There is already a table in the system which holds logged
> in/logged out data like
> 22/11/2004 14:02 - 22/11/2004 17:30
> 22/11/2004 09:00 - 22/11/2004 17:12
> 22/11/2004 10:25 - 22/11/2004 16:30
> 22/11/2004 11:02 - 22/11/2004 12:30
> 22/11/2004 16:00 - 22/11/2004 17:30
> The report user can then say for example they want to view data
> between 10th November and 12th November broken down into 15 minutes
> intervals which would produce a table like this:
> 10/11/2004 00:00 - 10/11/2004 00:15
> 10/11/2004 00:15 - 10/11/2004 00:30
> 10/11/2004 00:30 - 10/11/2004 00:45
> 10/11/2004 00:45 - 10/11/2004 01:00 etc etc
> Against each of these time slots, I need to work out the peak number
> of concurrent agents logged in using the first table.
The normal recommendation for this sort of post is to include:
o CREATE TABLE statements for the involved tables.
o INSERT statements with sample data.
o The desired output given the sample.
This makes it easy to post a tested solution, since the dirty work is
already set up, and it's just a matter of cut and paste.
This time I did it for you, because the problem seemed interesting enough.
First I set up a table of numbers. This is a one-column table with numbers
1 up to some limit (80000 in this case, about 55 days). Then I packed
the actual query into a stored procedure to easily permit parameters.
@len is the length of the reporting interval in minutes.
The query has a number of nested derived tables. The innermost gives
the number of agents logged in at any given minute. The middle table
normalizes each minute to the start of its reporting interval, and
the outermost gets the maximum count for each interval.
Further testing is recommended!
CREATE TABLE sessions (start datetime NOT NULL,
stop datetime NULL)
go
SET DATEFORMAT dmy
go
SELECT TOP 80000 n = identity(int, 1, 1)
INTO numbers
FROM Northwind..Orders a
CROSS JOIN Northwind..Orders b
go
INSERT sessions (start, stop)
SELECT '22/11/2004 14:02', '22/11/2004 17:30' UNION
SELECT '22/11/2004 09:00', '22/11/2004 17:12' UNION
SELECT '22/11/2004 10:25', '22/11/2004 16:30' UNION
SELECT '22/11/2004 11:02', '22/11/2004 12:30' UNION
SELECT '22/11/2004 16:00', '22/11/2004 17:30' UNION
SELECT '22/11/2004 16:00', '22/11/2004 16:05' UNION
SELECT '22/11/2004 16:06', '22/11/2004 16:10'
go
CREATE PROCEDURE get_peaks @start datetime,
@stop datetime,
@len smallint AS
SELECT intstart, intstop = dateadd(mi, @len, intstart), MAX(cnt)
FROM (SELECT intstart = dateadd(mi, @len *
(datediff(mi, @start, a.minute) / @len), @start),
a.cnt
FROM (SELECT mi.minute, cnt = COUNT(s.start)
FROM (SELECT minute = dateadd(mi, n, @start)
FROM numbers
WHERE n <= datediff(mi, @start, @stop)) AS mi
LEFT JOIN sessions s
ON mi.minute BETWEEN s.start AND s.stop
GROUP BY mi.minute) AS a
) AS b
GROUP BY intstart
ORDER BY intstart
go
EXEC get_peaks '20041122 08:00', '20041122 18:00', 15
go
DROP TABLE numbers
DROP TABLE sessions
DROP PROCEDURE get_peaks
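For anyone who wants to check the logic without a SQL Server instance at hand, the same minute-expansion idea can be sketched in plain Python; the function name and the two-session sample data below are illustrative, not part of the procedure above:

```python
from datetime import datetime, timedelta

def get_peaks(sessions, start, stop, length_mins):
    """Peak concurrent sessions per reporting interval.

    Mirrors the query above: walk the range minute by minute,
    count the sessions active at each minute, then keep the
    maximum count within each length_mins bucket.
    """
    peaks = {}
    total_mins = int((stop - start).total_seconds()) // 60
    for n in range(total_mins + 1):
        minute = start + timedelta(minutes=n)
        # BETWEEN s.start AND s.stop is inclusive on both ends
        cnt = sum(1 for s, e in sessions if s <= minute <= e)
        bucket = start + timedelta(minutes=length_mins * (n // length_mins))
        peaks[bucket] = max(peaks.get(bucket, 0), cnt)
    return peaks

sessions = [
    (datetime(2004, 11, 22, 14, 2), datetime(2004, 11, 22, 17, 30)),
    (datetime(2004, 11, 22, 9, 0), datetime(2004, 11, 22, 17, 12)),
]
result = get_peaks(sessions, datetime(2004, 11, 22, 8, 0),
                   datetime(2004, 11, 22, 18, 0), 15)
```

Note the cost: the expansion is O(minutes x sessions), which is fine for a day of data but is the reason the numbers table above is capped at 80000 rows.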
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se
Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techinfo/productdoc/2000/books.asp
|||>> I am writing a helpdesk system which records agents logging in and
out of the system. <<
Please post DDL, so that people do not have to guess what the keys,
constraints, Declarative Referential Integrity, datatypes, etc. in
your schema are. Sample data is also a good idea, along with clear
specifications. Does your boss, who is paying you, hide this
information and expect you to get your job done?
You might also want to learn that the only format for temporal data in
Standard SQL is ISO-8601 (yyyy-mm-dd hh:mm:ss.sss...) and start using
it; you can never tell, other systems just might follow ISO standards
:)
>> I need to write a stored procedure which will show the peak number
of agents logged in concurrently during a specified time period. <<
Why not VIEWs? SQL is a non-procedural language, after all. If you had
followed minimal netiquette, would this table look like this?
CREATE TABLE HelpDeskLogs
(agent_id CHAR(5) NOT NULL
REFERENCES Agents(agent_id)
ON UPDATE CASCADE
ON DELETE CASCADE,
start_time DATETIME DEFAULT CURRENT_TIMESTAMP NOT NULL,
finish_time DATETIME, -- null means still active
CHECK (start_time < finish_time),
PRIMARY KEY (agent_id, start_time));
>> The report user can then say for example they want to view data
between 10th November and 12th November broken down into 15 minutes
intervals which would produce a table like this: <<
Let's fill up a table of ranges:
CREATE TABLE ReportPeriods
(period_scale CHAR(2) DEFAULT '15' NOT NULL,
CHECK (period_scale IN ('15', '30', '45', 'HR', 'DY')),
start_time DATETIME NOT NULL,
finish_time DATETIME NOT NULL,
CHECK (start_time < finish_time),
PRIMARY KEY (period_scale, start_time));
In standard SQL, we have a predicate for durations like this:
SELECT COUNT(agent_id) AS active_agents
FROM ReportPeriods AS P, HelpDeskLogs AS L
WHERE (P.start_time, P.finish_time)
OVERLAPS (L.start_time, L.finish_time);
That predicate gets translated into this:
(P.start_time > L.start_time
AND NOT (P.start_time >= L.finish_time
AND P.finish_time >= L.finish_time))
OR (L.start_time > P.start_time
AND NOT (L.start_time >= P.finish_time
AND L.finish_time >= P.finish_time))
OR (P.start_time = L.start_time
AND (P.finish_time <> L.finish_time
OR P.finish_time = L.finish_time))
Yes, it is a bit weird because it has to handle NULLs in the general
case.
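For the common case of well-formed periods (start < finish) with no NULLs, the whole predicate collapses to a two-comparison test: each period starts before the other finishes. A Python sketch (the function and sample values are mine):

```python
from datetime import datetime

def overlaps(p_start, p_finish, l_start, l_finish):
    # Two well-formed, NULL-free periods share at least one
    # instant exactly when each starts before the other finishes.
    return p_start < l_finish and l_start < p_finish

# A 10:00-10:15 report period overlaps a 09:00-17:12 login...
assert overlaps(datetime(2004, 11, 22, 10, 0), datetime(2004, 11, 22, 10, 15),
                datetime(2004, 11, 22, 9, 0), datetime(2004, 11, 22, 17, 12))
# ...but a period starting after logout does not.
assert not overlaps(datetime(2004, 11, 22, 18, 0), datetime(2004, 11, 22, 18, 15),
                    datetime(2004, 11, 22, 9, 0), datetime(2004, 11, 22, 17, 12))
```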
You might also want to look up Rick Snodgrass at the University of
Arizona. He has a copy of his book on temporal queries in SQL as a PDF
on his university website.
|||"Dave" <dave@.court-technologies.com> wrote in message
news:7bbc1b13.0411250202.57f40780@.posting.google.com...
> Hi
> I am hoping someone might be able to help me out with this.
> I am writing a helpdesk system which records agents logging in and out
> of the system.
> I need to write a stored procedure which will show the peak number of
> agents logged in concurrently during a specified time period. Within
> the time period, the person viewing the report should be able to
> specify intervals at which to see the data.
> Eg. There is already a table in the system which holds logged
> in/logged out data like
> 22/11/2004 14:02 - 22/11/2004 17:30
> 22/11/2004 09:00 - 22/11/2004 17:12
> 22/11/2004 10:25 - 22/11/2004 16:30
> 22/11/2004 11:02 - 22/11/2004 12:30
> 22/11/2004 16:00 - 22/11/2004 17:30
> The report user can then say for example they want to view data
> between 10th November and 12th November broken down into 15 minutes
> intervals which would produce a table like this:
> 10/11/2004 00:00 - 10/11/2004 00:15
> 10/11/2004 00:15 - 10/11/2004 00:30
> 10/11/2004 00:30 - 10/11/2004 00:45
> 10/11/2004 00:45 - 10/11/2004 01:00 etc etc
> Against each of these time slots, I need to work out the peak number
> of concurrent agents logged in using the first table.
> Can anyone make any suggestions? The time period the report user can
> choose are either 15 mins, 30 mins, 45 mins, 1 hour and 1 day.
> Thanks in advance
CREATE TABLE LoginPeriods
(
agent_id VARCHAR(20) NOT NULL,
time_in DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP
CHECK (time_in <= CURRENT_TIMESTAMP),
time_out DATETIME NOT NULL DEFAULT '99991231'
CHECK (time_out <= CURRENT_TIMESTAMP OR time_out = '99991231'),
PRIMARY KEY (time_in, agent_id),
CHECK (time_in < time_out)
)
-- Your sample data
INSERT INTO LoginPeriods (agent_id, time_in, time_out)
VALUES ('A1', '20041122 14:02', '20041122 17:30')
INSERT INTO LoginPeriods (agent_id, time_in, time_out)
VALUES ('A2', '20041122 09:00', '20041122 17:12')
INSERT INTO LoginPeriods (agent_id, time_in, time_out)
VALUES ('A3', '20041122 10:25', '20041122 16:30')
INSERT INTO LoginPeriods (agent_id, time_in, time_out)
VALUES ('A4', '20041122 11:02', '20041122 12:30')
INSERT INTO LoginPeriods (agent_id, time_in, time_out)
VALUES ('A5', '20041122 16:00', '20041122 17:30')
-- Digits 0-9
CREATE VIEW Digits (d)
AS
SELECT 0 UNION ALL SELECT 1 UNION ALL SELECT 2 UNION ALL
SELECT 3 UNION ALL SELECT 4 UNION ALL SELECT 5 UNION ALL
SELECT 6 UNION ALL SELECT 7 UNION ALL SELECT 8 UNION ALL
SELECT 9
-- Nonnegative integers to some suitable upper bound
-- Used in representing the sequence of time periods from
-- begin to end datetimes
CREATE VIEW NonnegativeIntegers (n)
AS
SELECT Ones.d + 10 * Tens.d
FROM Digits AS Ones
CROSS JOIN
Digits AS Tens
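As a cross-check, the two views generate 0 through 99 by crossing a ones digit with a tens digit; the equivalent construction in Python:

```python
# Cross join of a ones digit with a tens digit yields 0..99,
# enough for up to 100 reporting periods per call.
digits = range(10)
nonnegative_integers = sorted(ones + 10 * tens
                              for ones in digits for tens in digits)
assert nonnegative_integers == list(range(100))
```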
-- For each time period between begin and end datetimes,
-- return login periods that overlap
CREATE FUNCTION LoginPeriodsBetween
(@begin_time DATETIME, @end_time DATETIME, @period_mins INT)
RETURNS TABLE
AS
RETURN(
SELECT DATEADD(MINUTE, I.n * @period_mins, @begin_time) AS begin_time,
DATEADD(MINUTE, (I.n+1) * @period_mins, @begin_time) AS end_time,
agent_id,
CASE WHEN time_in <= DATEADD(MINUTE, I.n * @period_mins, @begin_time)
THEN DATEADD(MINUTE, I.n * @period_mins, @begin_time)
ELSE time_in
END AS time_in,
CASE WHEN time_out <= DATEADD(MINUTE, (I.n+1) * @period_mins, @begin_time)
THEN time_out
ELSE DATEADD(MINUTE, (I.n+1) * @period_mins, @begin_time)
END AS time_out
FROM NonnegativeIntegers AS I
LEFT OUTER JOIN
LoginPeriods AS LP
ON time_out > DATEADD(MINUTE, I.n * @period_mins, @begin_time) AND
time_in < DATEADD(MINUTE, (I.n+1) * @period_mins, @begin_time)
WHERE I.n < DATEDIFF(MINUTE, @begin_time, @end_time) / @period_mins
)
-- Maximum number of concurrent agent logins per time period
CREATE FUNCTION MaxConcurrentAgents
(@begin_time DATETIME, @end_time DATETIME, @period_mins INT)
RETURNS TABLE
AS
RETURN(
SELECT begin_time, end_time, MAX(concurrent_agents) AS concurrent_agents_tally
FROM (SELECT LP1.begin_time, LP1.end_time,
LP1.agent_id,
LP1.time_in, LP1.time_out,
COUNT(LP2.agent_id) AS concurrent_agents
FROM LoginPeriodsBetween(@begin_time, @end_time, @period_mins) AS LP1
LEFT OUTER JOIN
LoginPeriodsBetween(@begin_time, @end_time, @period_mins) AS LP2
ON LP1.begin_time = LP2.begin_time AND
LP1.end_time = LP2.end_time AND
LP1.time_in >= LP2.time_in AND
LP1.time_in < LP2.time_out
GROUP BY LP1.begin_time, LP1.end_time, LP1.agent_id,
LP1.time_in, LP1.time_out) AS CA
GROUP BY begin_time, end_time
)
-- Maximum number of concurrent agent logins for each 30 minute
-- period between the specified begin and end datetimes
-- Note that no logins for a time period will be indicated by a 0 tally
SELECT begin_time, end_time, concurrent_agents_tally
FROM MaxConcurrentAgents('20041122 09:00', '20041122 18:00', 30)
ORDER BY begin_time
begin_time end_time concurrent_agents_tally
2004-11-22 09:00:00.000 2004-11-22 09:30:00.000 1
2004-11-22 09:30:00.000 2004-11-22 10:00:00.000 1
2004-11-22 10:00:00.000 2004-11-22 10:30:00.000 2
2004-11-22 10:30:00.000 2004-11-22 11:00:00.000 2
2004-11-22 11:00:00.000 2004-11-22 11:30:00.000 3
2004-11-22 11:30:00.000 2004-11-22 12:00:00.000 3
2004-11-22 12:00:00.000 2004-11-22 12:30:00.000 3
2004-11-22 12:30:00.000 2004-11-22 13:00:00.000 2
2004-11-22 13:00:00.000 2004-11-22 13:30:00.000 2
2004-11-22 13:30:00.000 2004-11-22 14:00:00.000 2
2004-11-22 14:00:00.000 2004-11-22 14:30:00.000 3
2004-11-22 14:30:00.000 2004-11-22 15:00:00.000 3
2004-11-22 15:00:00.000 2004-11-22 15:30:00.000 3
2004-11-22 15:30:00.000 2004-11-22 16:00:00.000 3
2004-11-22 16:00:00.000 2004-11-22 16:30:00.000 4
2004-11-22 16:30:00.000 2004-11-22 17:00:00.000 3
2004-11-22 17:00:00.000 2004-11-22 17:30:00.000 3
2004-11-22 17:30:00.000 2004-11-22 18:00:00.000 0
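To sanity-check the clip-and-count logic offline, here is a Python paraphrase of LoginPeriodsBetween and MaxConcurrentAgents; it assumes half-open login periods, and the function and variable names are mine:

```python
from datetime import datetime, timedelta

def max_concurrent(logins, begin, end, period_mins):
    """Max concurrent logins per period between begin and end.

    For each period: clip the overlapping logins to the period
    boundaries, then for each clipped login count how many clipped
    intervals cover its start (the LP1/LP2 self-join), and keep
    the maximum of those counts.
    """
    out = {}
    n_periods = int((end - begin).total_seconds()) // 60 // period_mins
    for n in range(n_periods):
        p0 = begin + timedelta(minutes=n * period_mins)
        p1 = p0 + timedelta(minutes=period_mins)
        # Same join condition as the SQL: time_out > p0 AND time_in < p1,
        # with time_in/time_out clipped to the period.
        clipped = [(max(t_in, p0), min(t_out, p1))
                   for t_in, t_out in logins if t_out > p0 and t_in < p1]
        tally = 0
        for t_in, _ in clipped:
            tally = max(tally, sum(1 for i, o in clipped if i <= t_in < o))
        out[(p0, p1)] = tally
    return out

logins = [
    (datetime(2004, 11, 22, 14, 2), datetime(2004, 11, 22, 17, 30)),
    (datetime(2004, 11, 22, 9, 0), datetime(2004, 11, 22, 17, 12)),
    (datetime(2004, 11, 22, 10, 25), datetime(2004, 11, 22, 16, 30)),
    (datetime(2004, 11, 22, 11, 2), datetime(2004, 11, 22, 12, 30)),
    (datetime(2004, 11, 22, 16, 0), datetime(2004, 11, 22, 17, 30)),
]
peaks = max_concurrent(logins, datetime(2004, 11, 22, 9, 0),
                       datetime(2004, 11, 22, 18, 0), 30)
# e.g. the 16:00-16:30 period peaks at 4 concurrent agents,
# and 17:30-18:00 is 0, matching the result table above.
```

The strict `t_out > p0` comparison is why the 17:30-18:00 row is 0: logins ending exactly at 17:30 are excluded, just as in the SQL join.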
--
JAG