Help! I have a hierarchical Central Publisher >>> Publishing Subscriber >>> Subscriber topology.
There are no conflicts, error messages, or deletes occurring as a
result of replication.
The problem I have is that changes made at the central publisher are
not delivered via a republisher to the 3rd-level subscriber. The
situation occurs when changes are made at the central publisher
but the filter on the publishing subscriber prohibits the records from
being sent to the regional subscriber. This is fine and the expected
behaviour as per the join filter. There are, however, a series of rows
added to the MSmerge_genhistory table even though the records are excluded by
the filter. Obviously at this point they have not been written to
MSmerge_contents. Next, replication occurs between the 3rd-level
subscriber and the 2nd-level publishing subscriber. MSmerge_genhistory
records are written to the subscriber at this time.
Next, the data at the top-level publisher changes such that the join
filter now includes some of the data created before
replication last ran. MSmerge_contents is updated on the regional
publisher, and the database is consistent with the top-level publisher
at the completion of the replication process. Replication now runs
between the lowest-level subscriber and the republisher, and only a portion
of the total subset of data is delivered to the subscriber. The
records affected are those that were initially created before
replication first ran, i.e. the missing records are the ones that
only became part of the filtered dataset as a result of some other data
changing.
Is it normal operation for MSmerge_genhistory to contain entries that
HAVE NOT been delivered to the subscriber? It seems that this table
is the determiner of whether the data is sent on to lower-level
subscribers. My experiments have shown that if I remove the entries from
the MSmerge_genhistory table that relate to the missing records, then at
least when the merge process fires, all the data is correctly delivered
to the subscribers. I guess this has the effect of the merge agent
thinking it has never sent the data; conversely, when those
rows exist, it believes they have already been successfully delivered and
does not send them again.
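For reference, the sort of check I ran to find those entries looks
roughly like this (going from the SQL Server 2000 system table columns
as I understand them; treat it as a sketch only):
-- generations recorded in MSmerge_genhistory that have no tracked
-- changes at all in MSmerge_contents on this subscriber
select gh.generation, gh.coldate
from MSmerge_genhistory gh
left join MSmerge_contents c on c.generation = gh.generation
where c.generation is null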
So the problem is basically that old records created on the
top-level publisher that have been subject to replication but not
delivered, because they don't satisfy the filter criteria, will only be
delivered to the publishing subscriber when they do eventually satisfy
the filter conditions as some other data changes. They will not flow
through to the lowest-level subscriber.
Could there be something that I am missing when setting up the
republisher? I have used a global pull subscription to the central
publisher, and a local pull subscription exists between the subscriber
and the publishing subscriber.
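For what it's worth, the two subscriptions were created roughly like
this (server, database, and publication names changed for posting):
-- at the republisher: global pull subscription to the central publisher
exec sp_addmergepullsubscription
    @publication = N'CentralPub',
    @publisher = N'CENTRAL',
    @publisher_db = N'HQ',
    @subscriber_type = N'global'

-- at the 3rd-level subscriber: local pull subscription to the republisher
exec sp_addmergepullsubscription
    @publication = N'RegionalPub',
    @publisher = N'REGION1',
    @publisher_db = N'Regional',
    @subscriber_type = N'local'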
|||Hello Davec,
This issue seems to be complex and hard to troubleshoot in newsgroups. I
recommend that you open a Support incident with Microsoft Customer Support
Services (CSS) so that a dedicated Support Professional can work with you
in a more timely and efficient manner. If you need any help in this regard,
please let me know.
For a complete list of Microsoft Customer Support Services phone numbers,
please go to the following address on the World Wide Web:
<http://support.microsoft.com/directory/overview.asp>
If you are outside the US please see http://support.microsoft.com for
regional support phone numbers.
315642 INF: Information that PSS needs to troubleshoot SQL Server
replication
http://support.microsoft.com/?id=315642
The MSmerge_genhistory table contains one row for each generation that a
Subscriber knows about (within the retention period). It is used to avoid
sending common generations during exchanges and to resynchronize
Subscribers that are restored from backups. You may want to use the
following method to check the behavior of these tables:
use <publication database>
select * from MSmerge_contents
select * from MSmerge_genhistory
use <publishing subscriber database>
select * from MSmerge_contents
select * from MSmerge_genhistory
Insert a new record into the replicated table, then run the above script
again to see the results.
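You may also compare the number of tracked changes per generation on
both sides with a variation like this (a rough sketch only):
select generation, count(*) as change_rows
from MSmerge_contents
group by generation
order by generation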
Best Regards,
Peter Yang
MCSE2000/2003, MCSA, MCDBA
Microsoft Online Partner Support
When responding to posts, please "Reply to Group" via your newsreader so
that others may learn and benefit from your issue.
=====================================================
This posting is provided "AS IS" with no warranties, and confers no rights.
|||Your situation is precisely what occurs and exactly what should happen. We
discussed this a few days ago, and several people drew VERY wrong conclusions
from the discussion.
When you filter a publication, the filter forms a hard barrier on the data
which is allowed to exist at the subscriber. This means that if a change at
the subscriber causes a row to move outside of the filter criteria, it will
be removed from the subscriber.
Now, this gets much more complex when you deal with filtering criteria which
can dynamically change. So, let's define something similar to what you
have:
(ServerA)Publisher
<--merge-->(ServerB)Subscriber/Publisher<--merge-->(ServerC)Subscriber
ServerA contains all of the data within the hierarchy
ServerB contains all data for TX, CA, WA, OR, and NV
ServerC contains all data for TX
You initialize everything. ServerA would contain data for everything.
ServerB would contain data for only TX, CA, WA, OR, and NV. ServerC would
contain only data for TX. (I'll leave out the case where ServerC was
configured for all data from a state that was not part of ServerB's data set,
for simplicity.)
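For illustration only, the kind of filtering I'm describing would be set
up roughly like this (the publication, table, and column names are made up):
-- row filter on the parent article
exec sp_addmergearticle
    @publication = N'RegionalPub',
    @article = N'Customer',
    @source_object = N'Customer',
    @subset_filterclause = N'State IN (''TX'', ''CA'', ''WA'', ''OR'', ''NV'')'

-- join filter so each Orders row follows its parent Customer row
exec sp_addmergefilter
    @publication = N'RegionalPub',
    @article = N'Orders',
    @filtername = N'Orders_Customer',
    @join_articlename = N'Customer',
    @join_filterclause = N'Customer.CustID = Orders.CustID',
    @join_unique_key = 1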

Now you make a bunch of changes all over the place to make things
interesting. When ServerA and ServerB synchronize, all of the changes from
ServerB will upload to ServerA, conflicts resolve, and then anything
corresponding to the filter for ServerB will be downloaded along with
metadata for MSmerge_genhistory corresponding to everything. When ServerC
synchronizes with ServerB, everything from TX is uploaded to ServerB,
conflicts resolve, and then anything resultant for TX is downloaded to
ServerC along with metadata for MSmerge_genhistory corresponding to
everything on ServerB. (This metadata exchange happens for a variety of
reasons, as much "by design" as "just the way it is coded". I'll leave
that explanation to the replication team if they care to comment on it, although
I'd like to see a few changes to the way it behaves for the sake of saving
bandwidth.) This process continues for some amount of time.
Now Customer100 in NV moves to TX. So, you update their data to reflect the
new location. All of a sudden you have a new set of data that should appear
on ServerC, since it now corresponds to that filter. Well, the only update
you did was to the Customer100 row. So, this is the only change which gets
logged to MSmerge_contents. During the next synchronization cycle, the
engine will essentially compare MSmerge_genhistory between B and C and
determine that this change had not been sent yet, so it gets sent down to C
(along with metadata entries). However, NONE of the data related to
Customer100 (employees, orders, credit records, etc.) will be sent to
ServerC. That is because the engine is not coded to recalculate the entire
filter during each synch cycle in order to determine if any data has now
moved into the filter and therefore perform essentially a micro-snapshot of
that data. (Sounds like a nice feature request to me, hint hint...) This
is a performance consideration, since it would require large amounts of
overhead to do this.
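In other words, the only thing the triggers log is something like this
(purely illustrative names again):
-- the move itself: a single-row update
update Customer set State = 'TX' where CustID = 100

-- only that one Customer row gains an entry in MSmerge_contents; the
-- customer's orders, employees, etc. were not themselves modified, so
-- they get no entries and nothing forces them down to ServerC
select tablenick, rowguid, generation from MSmerge_contents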
So, knowing a bit about the metadata internals, you would think that you
could simply blow away the entries in MSmerge_genhistory on ServerC that
corresponded to any rows for Customer100 and get what you are after. Not so
fast... MSmerge_genhistory governs what needs to be sent and what has been
received. However, MSmerge_contents governs the entire system. If it does
not exist in MSmerge_contents, it will NOT be sent, no matter what you try
to do. So, all of the data for Customer100 went from ServerA to ServerB
initially via the snapshot which initialized everything. Customer100 just
so happened to have a bunch of data entered for Order100, 107, 110, and 145.
There were some updates to Order110 and 145. That means MSmerge_contents
only contains entries for Order110 and 145 and they are updates only. So,
you blow away the entries on ServerC in MSmerge_genhistory corresponding to
Customer100's data. The engine kicks off and sees that it needs to transfer
the updates for Order110 and 145, because they don't exist on ServerC. It
then throws a bunch of errors, because it's pretty hard to replicate the
updates when the original row doesn't exist on ServerC. (The insert was
part of the snapshot, but not issued separately and so does not appear in
MSmerge_contents.) So, you are stuck.
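You can see that dead end directly with something like this, assuming
the uniqueidentifier column merge replication added to the table (names
are made up):
-- Customer100's orders that arrived only via the initial snapshot:
-- no change rows exist for them, so there is nothing to re-send
select o.OrderID, o.rowguid
from Orders o
left join MSmerge_contents c on c.rowguid = o.rowguid
where o.CustID = 100
  and c.rowguid is null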
This is only one case. The other, more likely case is that the changes
were purged from the metadata during a maintenance cycle. That leaves no
possibility of the merge engine moving them, because they do not exist in
MSmerge_genhistory or MSmerge_contents.
That's a really simplified explanation of what is going on. What you
need to do when you have data that migrates from one filter to another,
particularly when you are replicating across hierarchies with more than 2
levels, is to reinitialize in order to get everything down. Is it a perfect
solution? No. But, this is NOT an easy problem to solve and what is here
is really just the tip of a really complex as well as really interesting
problem.
Mike
http://www.solidqualitylearning.com
Disclaimer: This communication is an original work and represents my sole
views on the subject. It does not represent the views of any other person
or entity either by inference or direct reference.
"Davec" <dcowie@.ballsolutions.com> wrote in message
news:1138940087.640138.116560@.g43g2000cwa.googlegr oups.com...
|||> So, you blow away the entries on ServerC in MSmerge_genhistory
> corresponding to Customer100's data. The engine kicks off and sees that
> it needs to transfer the updates for Order110 and 145, because they don't
> exist on ServerC. It then throws a bunch of errors, because it's pretty
> hard to replicate the updates when the original row doesn't exist on
> ServerC. (The insert was part of the snapshot, but not issued separately
> and so does not appear in MSmerge_contents.) So, you are stuck.
And if I'm following you correctly, if you didn't "blow away the entries on
ServerC in MSmerge_genhistory", then you'd finally receive Order110 and 145
for Customer100? Is there a way at the publisher to make the system think
there was an update on all of Customer100's orders, just to make sure they
get transferred on the next sync, without forcing the subscriber to reinit or
perform some futuristic mini-snapshot? Or are the replication triggers too
clever to allow such a thing to happen?
--Troy
|||No, you wouldn't. That is because Order110 and 145 were updated ONLY.
There is nothing in MSmerge_contents corresponding to the insert. So, even
if you could force the updates to go through by some method, there wouldn't
be anything to update on the subscriber, since the row wouldn't exist. No,
there is nothing in the engine to accomplish this. This is one of those
problems I could spend 50 pages explaining and we'd still not cover half of
the permutations which would have to be solved.
The merge engine logs changes into metadata tables for one reason:
performance. If it did not log the changes, then each synchronization cycle
would be required to do a row-by-row, column-by-column comparison between
the publisher and subscriber in order to determine changes. That obviously
incurs unacceptable overhead, so it is simply not done. So, you are left
with 2 very simple facts:
1. The merge engine will only send changes which get logged to
MSmerge_contents and are still there when the synch cycle executes
2. MSmerge_genhistory is used to determine what changes need to be exchanged
between publisher and subscriber as much as it is used to ensure that a
change is exchanged once and only once in order to prevent transactions from
looping multiple times which could destroy other changes.
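If you want to see which generations a given replica already considers
known (and will therefore never request again), you can look at the
metadata directly; a rough sketch:
select generation, art_nick, coldate
from MSmerge_genhistory
order by coldate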
So, plain and simple, merge replication will not pick up data and forward it
unless it is a change which exists within a generation which has not already
been exchanged between publisher and subscriber. And, there is no way to
reliably force it to do so. (I've been trying to find a method to do this
which works in all cases since Beta 2 of SQL Server 7.0 and I've been
working on this generic problem since the mid-90s without coming up with a
solution that has any chance of performing acceptably.)
Mike
http://www.solidqualitylearning.com
Disclaimer: This communication is an original work and represents my sole
views on the subject. It does not represent the views of any other person
or entity either by inference or direct reference.
"Troy Wolbrink" <wolbrink@.ccci.org> wrote in message
news:Ob0gON0KGHA.2064@.TK2MSFTNGP11.phx.gbl...
|||OK, well, it seems like I have a bit of a problem in my application if I
continue to use a hierarchical merge topology.
I would have thought this issue would be a reasonably common
situation, and very limiting.
Certainly reinitialisation is not a good solution for my situation, for
a number of reasons. We unfortunately have a requirement to run merge
processes quite frequently, so I can expect this situation to occur
quite frequently as a result of my application business objects design.
I guess dummy updates to all related rows, at the time that a
change in assignment occurs, could be a messy workaround.
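Something like this is what I have in mind (made-up names); setting a
replicated column to itself should make the merge triggers log the child
rows to MSmerge_contents so they flow down:
-- dummy update: touch every related child row when an assignment changes
update Orders
set CustID = CustID
where CustID = 100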
Thanks for the clarification of how it works. The problem has
certainly been interesting.
Davec
|||Actually, I have one more question: do you believe this behaviour is
restricted to republisher-style topologies (hierarchical)? I.e., would I
expect the same behaviour in, say, a central publisher with one level of
subscribers? My tests have shown everything works correctly in this
situation.