
Friday, March 30, 2012

Puzzled by installation error.

I have tried installing SQL Server 2005 Express (with Standard or Advanced Services) several times on my XP SP2 system, and every attempt failed.

The error messages all concern the Secure Sockets Layer (SSL) certificate and provider: either a valid SSL certificate cannot be found or created, or the authentication provider is not trusted (actually I don't know what any of this means or where to find these settings). I am learning SQL Server at home and am not going to build a website. Why do I need SSL? Or, if it really is necessary, how do I obtain a certificate?

I installed IIS (presumably 5.1, since I have XP SP2); is this related? Can I install without IIS?

Or do I need a Server OS?

Turn off SSL on the default website (where Reporting Services will be installed) and you should be able to install the whole thing easily. Afterwards you can turn SSL back on for the default website.

HTH, Jens K. Suessmeyer.


http://www.sqlserver2005.de

|||

How do I turn off SSL? In the SQL Express setup, or in the IIS settings in the MMC?

My IIS settings show a default SSL port of 440, but I cannot change it.


Putting reusable code into SSIS components....

If I have standard code for logging, event handling, error handling, etc., how do you get the standard SSIS components to include this code?

I would hate to have to add this standard code each time I add a component to my package, especially if I have to build hundreds of them...

Thx in advance...

RC

You have a few options. The 2 worth considering are:

1) Write a custom log provider
2) Build your custom logging/error handling into event handlers.

2) is, I would say, easier. This demo may help: http://blogs.conchango.com/jamiethomson/archive/2005/06/11/1593.aspx
Oh, and don't forget about the ability to create template packages. Your template can contain all the logging/error handling functionality that you need which means you only have to define it once. See here for more on templates: http://blogs.conchango.com/jamiethomson/archive/2005/11/12/2380.aspx

Trust me, you don't have to define all this stuff every time you add a component/task.

-Jamie

|||

Note that BOL has a topic "Creating a Custom Log Provider in Integration Services" that has enough code to get you going. However, at RTM there was not a separate standalone custom log provider code sample.

-Doug
|||

What I've done is put source code for event handlers into a source control system, and then copy & paste them into every event handler box on every component. I admit, this is a TERRIBLE solution, but I've not had time to develop an XSLT solution -- the only decent solution I can think of is some XSLT to automate this terrible copy & paste. Plus I've feared automatically changing dtsx files, as they've changed through the beta & CTP cycle, and they seem to undergo many changes even with very small changes by a human.

Has anyone else solved this, or written XSLT they'd care to share, to automate the copy & paste into many copies of event handlers?

Granted, it would be nice if SSIS supported reusability, but, it seems to lack reusability at other levels as well (especially Derived Column expressions are a real problem lacking reusability), so I don't know how likely it is that they'll address this any time soon.
|||

Perry_Rapp wrote:

What I've done is put source code for event handlers into a source control system, and then copy & paste them into every event handler box on every component. I admit, this is a TERRIBLE solution, but I've not had time to develop a XSLT solution -- the only decent solution I can think of is some XSLT to automate this terrible copy & paste. Plus I've feared automatically changing dtsx files, as they've changed through the beta & CTP cycle, and they seem to undergo many changes even with very small changes by a human.

Has anyone else solved this, or written XSLT they'd care to share, to automate the copy & paste into many copies of event handlers?

Granted, it would be nice if SSIS supported reusability, but, it seems to lack reusability at other levels as well (especially Derived Column expressions are a real problem lacking reusability), so I don't know how likely it is that they'll address this any time soon.

Reusability will be addressed in a big way in the next version; I am quite sure of that!

If you are logging to a database table then you could write a sproc to do the logging and call that from wherever you need to.

What I do is have a "master" package in which all my logging is configured. Then from that I call the packages that actually do the work. I pass in an ID and the "master" package looks up in a metadata DB which package(s) it needs to call for that ID.

Also be aware that you can build template packages that contain all your logging. http://blogs.conchango.com/jamiethomson/archive/2005/11/12/2380.aspx

-Jamie


Putting 'NA' in a Money Datatype Column

Hello,
Can anyone solve this prob...
BACKGROUND:
I have a column in my query that is calculated from money datatypes. To
avoid the divide by zero error I created a nice inline case statement that
places something else in that field instead. It works great with any number
of my choice. For example I can put 100 everywhere that this calculated
field would've had a divide by zero error. HOWEVER, I want to put the word
'NA' in that field when this happens. But I can't because the datatype is
wrong:
ERROR:
Hence the error:Implicit conversion from data type char to money is not
allowed.
PROBLEM:
So is there a way to legally cast or convert so that I can put 'NA' in this
field instead of a number like zero or a hundred?
Here is my attempt so far (doesn't work):
CASE WHEN [expression] = 0 THEN CAST('NA' AS CHAR(2)) ELSE [other
expression]
Thanks.
--
Alex A.

You will need to cast the entire field as varchar.
Another option would be to return NULL when the expression = 0 and have the
presentation tier (Crystal Reports, Excel, ASP.NET, etc.) display NULL as
NA.
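That presentation-tier mapping can be sketched in miniature; this is a hypothetical Python illustration (the function name and formatting are my own, not from any reporting tool's API), with None standing in for SQL NULL:

```python
def display_amount(value):
    # value is None when the query returned NULL (the divide-by-zero case);
    # the presentation tier shows "NA" instead of a number.
    return "NA" if value is None else f"{value:,.2f}"

print(display_amount(None))    # NA
print(display_amount(1234.5))  # 1,234.50
```

This keeps the column a clean money/decimal type in the database and pushes the 'NA' label out to the layer that formats it anyway.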
"Alex A." <AlexA@.discussions.microsoft.com> wrote in message
news:12E0480D-DB52-4664-A2E2-F9D335FE5BB5@.microsoft.com...
> Hello,
> Can anyone solve this prob...
> BACKGROUND:
> I have a column in my query that is calculated from money datatypes. To
> avoid the divide by zero error I created a nice inline case statement that
> places something else in that field instead. It works great with any
> number
> of my choice. For example I can put 100 everywhere that this calculated
> field would've had a divide by zero error. HOWEVER, I want to put the
> word
> 'NA' in that field when this happens. But I can't because the datatype is
> wrong:
> ERROR:
> Hence the error:Implicit conversion from data type char to money is not
> allowed.
> PROBLEM:
> So is there a way to legally cast or convert so that I can put 'NA' in
> this
> field instead of a number like zero or a hundred?
> Here is my attempt so far (doesn't work):
> CASE WHEN [expression] = 0 THEN CAST('NA' AS CHAR(2)) ELSE [other
> expression]
> Thanks.
> --
> Alex A.
>
|||
WKidd,
Thanks for the post...
I can't cast the whole field because if it is not a divide by zero situation
I need the dollar amount to calculate... But I like the Null idea. Anyone
have the syntax in mind for that? Can I just put the word NULL after the
THEN statement?
"WKidd" wrote:

> You will need to cast the entire field as varchar.
> Another option would be to return NULL when the expression = 0 and have the
> presentation tier (Crystal Reports, Excel, ASP.NET, etc.) display NULL as
> NA.
> "Alex A." <AlexA@.discussions.microsoft.com> wrote in message
> news:12E0480D-DB52-4664-A2E2-F9D335FE5BB5@.microsoft.com...
>
>
|||
Look up how to use a NULL in SQL. But I would also get rid of the
MONEY datatype.
The MONEY datatype has rounding errors. Using more than one operation
(multiplication or division) on money columns will produce severe
rounding errors. A simple way to visualize money arithmetic is to place
a ROUND() function call after every operation. For example,
Amount = (Portion / total_amt) * gross_amt
can be rewritten using money arithmetic as:
Amount = ROUND(ROUND(Portion/total_amt, 4) * gross_amt, 4)
Rounding to four decimal places might not seem an issue, until the
numbers you are using are greater than 10,000.
BEGIN
DECLARE @gross_amt MONEY,
@total_amt MONEY,
@my_part MONEY,
@money_result MONEY,
@float_result FLOAT,
@all_floats FLOAT;
SET @gross_amt = 55294.72;
SET @total_amt = 7328.75;
SET @my_part = 1793.33;
SET @money_result = (@my_part / @total_amt) * @gross_amt;
SET @float_result = (@my_part / @total_amt) * @gross_amt;
SET @all_floats = (CAST(@my_part AS FLOAT)
/ CAST(@total_amt AS FLOAT))
* CAST(@gross_amt AS FLOAT);
SELECT @money_result, @float_result, @all_floats;
END;
@money_result = 13525.09 -- incorrect
@float_result = 13525.0885 -- incorrect
@all_floats = 13530.5038673171 -- correct, with a -5.42 error

|||

For divisions, I usually write:
SELECT A / NULLIF(B, 0)
This will yield NULL if B is 0, because Anything / NULL yields NULL.
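The same guard can be sketched outside SQL; here is a minimal Python illustration (safe_div is a hypothetical helper of my own, with None standing in for NULL):

```python
def safe_div(a, b):
    # Mirrors SELECT a / NULLIF(b, 0): NULLIF turns a zero divisor into
    # NULL, and anything divided by NULL yields NULL (None here).
    return None if b == 0 else a / b

print(safe_div(10, 2))  # 5.0
print(safe_div(10, 0))  # None
```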
Regarding the precision of the money data type, you may want to do an
implicit (or an explicit) conversion to the decimal (or the float) data
type, like this:
DECLARE @gross_amt MONEY,
@total_amt MONEY,
@my_part MONEY;
SET @gross_amt = 55294.72;
SET @total_amt = 7328.75;
SET @my_part = 1793.33;
SELECT CAST((1. * @my_part / @total_amt) * @gross_amt AS money)
The correct result would have been 13530.5038673170731707317073170(...)
but converted back to the money datatype it is 13530.5039
Razvan
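The rounding loss discussed above can be reproduced outside SQL Server. The sketch below is a hedged Python illustration using Decimal in place of MONEY; SQL Server's exact internal rounding is an implementation detail, but truncating each intermediate result to four decimal places reproduces the figures in the example:

```python
from decimal import Decimal, ROUND_DOWN

gross_amt = Decimal("55294.72")
total_amt = Decimal("7328.75")
my_part = Decimal("1793.33")

def money(x):
    # Keep only four decimal places after each operation, as MONEY does
    return x.quantize(Decimal("0.0001"), rounding=ROUND_DOWN)

# Truncating the intermediate quotient loses precision...
money_result = money(money(my_part / total_amt) * gross_amt)
# ...while doing the whole computation in floating point does not.
float_result = (float(my_part) / float(total_amt)) * float(gross_amt)

print(money_result)  # 13525.0885 -- the lost precision
print(float_result)  # 13530.5038... -- the correct value
```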

putting error lines in new table or file

I'm importing large text files (12Gig). I know there are rows in the data
that will not parse correctly. What I'm trying to figure out is how to set
up the import and when there is an error parsing a row, that row either gets
saved to another table or text file and then continues to import the data.
This process would continue until the entire file is imported.

Hi,
If you use some of the tasks in DTS packages you can configure "an
Error file".
If any rows are skipped they are placed in that file. I'm pretty
sure that you can use this feature with the Text file import.
HTH
Barry

Wednesday, March 28, 2012

putting a field into the pageheader section - how?

Hi
Is it possible to place a field (typically "sum" or "first") into the
Header? Simply placing it there causes a compile error.
Thanks
Mike

Place "sum" or "first" in any textbox (say, textbox10) in the body section.
Then use =ReportItems!textbox10.value in the page header section.
Ponnurangam
"Mike" <mike.cornell@.interactivesa.co.za> wrote in message
news:#AieSwPsEHA.2536@.TK2MSFTNGP11.phx.gbl...
> Hi
> Is it possible to place a field (typically "sum" or "first") into the
> Header. Simply placing it there causes a compile error.
> Thanks
> Mike
>

Putting @Parameters in Stored Procedures

I am creating a stored procedure and I am getting an error which relates to DATENAME.

SELECT COUNT(*) AS calls, DATENAME(@varDate, CALLSTARTTIME) AS 'Total Calls'

FROM CALL_LOG_MASTER

WHERE (COMMERCIALS = '1') AND (CALLSTARTTIME >= @StartDate) AND (CALLENDTIME <= @EndDate)

SQL doesn't like: DATENAME(@varDate, CallStartTime)
SQL works fine if I change @varDate into 'yy', 'mm', 'dd', or 'wk'.
Since I do not want to make five nearly identical stored procedures just because of @varDate...

Is there any way to work around this problem?



Well, you could do something like this:
DECLARE @varDate VARCHAR(10)
SELECT @varDate = 'mm'
SELECT
    OrderDate,
    CASE @varDate
        WHEN 'mm' THEN DATENAME(mm, OrderDate)
        WHEN 'yy' THEN DATENAME(yy, OrderDate)
        WHEN 'dw' THEN DATENAME(dw, OrderDate)
    END -- added this as an EDIT to my post
FROM Orders

|||I tried,

ALTER PROCEDURE dbo.GET_TOTALCALLS
(
@VarDate as char,
@StartDate as char,
@EndDate as char
)
AS
SELECT
COUNT(*) AS 'Calls',
CASE @VarDate
When 'yy' Then DATEPART(yy, CALLSTARTTIME) AS 'Year'
FROM CALL_LOG_MASTER
WHERE (COMMERCIALS = '1') AND (CALLSTARTTIME IS NOT NULL) AND (CALLENDTIME IS NOT NULL)
GROUP BY DATEPART(yy, CALLSTARTTIME)
ORDER BY DATEPART(yy, CALLSTARTTIME)
SQL wouldn't allow me to save. What did I do wrong?
|||Somehow when I copied and pasted my test code I left out an END, sorry about that!
Try it like this:
SELECT
COUNT(*) AS 'Calls',
CASE @VarDate
When 'yy' Then DATEPART(yy, CALLSTARTTIME) END AS 'Year'

FROM CALL_LOG_MASTER
WHERE (COMMERCIALS = '1') AND (CALLSTARTTIME IS NOT NULL) AND (CALLENDTIME IS NOT NULL)
GROUP BY DATEPART(yy, CALLSTARTTIME)
ORDER BY DATEPART(yy, CALLSTARTTIME)

Also, in the future, you need to be more specific than "SQL wouldn't allow me to save". Exact error messages go a long way to getting exact help.
|||thanks for the advice.
got another error:
ADO error: Column 'Call_LOG_MASTER.CALLSTARTTIME' is invalid in the select list because it is not contained in either an aggregate function or the GROUP BY clause.

SELECT
CASE @varDate
when 'yy' Then DATEPART(yy, CALLSTARTTIME) END as 'Year',
COUNT(DATEPART(yy, CALLSTARTTIME)) AS 'Calls'
FROM CALL_LOG_MASTER
WHERE (COMMERCIALS = '1') AND (CALLSTARTTIME >= @StartDate) AND (CALLENDTIME <= @EndDate)
GROUP BY DATEPART(yy, CALLSTARTTIME)
ORDER BY DATEPART(yy, CALLSTARTTIME)
|||

I think I found the problem:

ALTER PROCEDURE dbo.GET_TOTALCALLS
(
@varDate as varchar(255),
@StartDate as datetime,
@EndDate as datetime
)
AS
SELECT
CASE @varDate
When 'Year' Then DATEPART(yy, CALLSTARTTIME)
When 'Month' Then DATEPART(mm, CALLSTARTTIME)
When 'Week' Then DATEPART(wk, CALLSTARTTIME)
When 'Day' Then DATEPART(dd, CALLSTARTTIME)
when 'Quarter' Then Datepart(qq, CALLSTARTTIME)
END,
Count(*) as 'Total Calls'
FROM CALL_LOG_MASTER
WHERE (COMMERCIALS = '1') AND (CALLSTARTTIME >= @StartDate) AND (CALLENDTIME <= @EndDate)
GROUP BY
CASE @varDate
When 'Year' Then DATEPART(yy, CALLSTARTTIME)
When 'Month' Then DATEPART(mm, CALLSTARTTIME)
When 'Week' Then DATEPART(wk, CALLSTARTTIME)
When 'Day' Then DATEPART(dd, CALLSTARTTIME)
when 'Quarter' Then Datepart(qq, CALLSTARTTIME)
End
ORDER BY
CASE @varDate
When 'Year' Then DATEPART(yy, CALLSTARTTIME)
When 'Month' Then DATEPART(mm, CALLSTARTTIME)
When 'Week' Then DATEPART(wk, CALLSTARTTIME)
When 'Day' Then DATEPART(dd, CALLSTARTTIME)
when 'Quarter' Then Datepart(qq, CALLSTARTTIME)
End
RETURN

However, the column can't be named this way. My new question is: how do I rename the DATEPART column by case?
Namely,
when @varDate = 'Year', the column name should be 'Year';
when @varDate = 'Month', the column name should be 'Month'; etc.

|||still stuck on this problem, could use some help if possible =)

|||If that is your requirement, then I would scrap what you have and just do something like this:
ALTER PROCEDURE dbo.GET_TOTALCALLS
(
@varDate as varchar(255),
@StartDate as datetime,
@EndDate as datetime
)
AS
IF @varDate = 'Year'
BEGIN
SELECT
DATEPART(yy, CALLSTARTTIME) AS Year,
Count(*) as 'Total Calls'
FROM CALL_LOG_MASTER
WHERE (COMMERCIALS = '1') AND (CALLSTARTTIME >= @StartDate) AND (CALLENDTIME <= @EndDate)
GROUP BY DATEPART(yy, CALLSTARTTIME)
ORDER BY 1
END
IF @varDate = 'Month'
BEGIN
SELECT
DATEPART(mm, CALLSTARTTIME) AS Month,
Count(*) as 'Total Calls'
FROM CALL_LOG_MASTER
WHERE (COMMERCIALS = '1') AND (CALLSTARTTIME >= @StartDate) AND (CALLENDTIME <= @EndDate)
GROUP BY DATEPART(mm, CALLSTARTTIME)
ORDER BY 1
END
-- etc
RETURN
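The branching logic of that procedure can be illustrated in miniature. This is a hypothetical Python sketch (the sample dates and the DATEPART table are my own, and isocalendar() weeks differ from SQL Server's wk datepart); each dictionary entry plays the role of one IF block:

```python
from collections import Counter
from datetime import datetime

# Hypothetical stand-in for CALL_LOG_MASTER's CALLSTARTTIME values
calls = [datetime(2005, 1, 3), datetime(2005, 6, 9), datetime(2006, 1, 3)]

# One entry per @varDate value, mirroring the IF blocks in the procedure
DATEPART = {
    "Year": lambda d: d.year,
    "Month": lambda d: d.month,
    "Day": lambda d: d.day,
    "Week": lambda d: d.isocalendar()[1],  # ISO week, not SQL's wk
    "Quarter": lambda d: (d.month - 1) // 3 + 1,
}

def total_calls(rows, var_date):
    # GROUP BY DATEPART(...) with COUNT(*), in miniature
    return Counter(DATEPART[var_date](d) for d in rows)

print(total_calls(calls, "Year"))  # Counter({2005: 2, 2006: 1})
```

Dispatching on the parameter once, up front, is what lets each branch give its output column a fixed name ('Year', 'Month', ...), which a single CASE expression cannot do.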

|||yes, thank you. That will work fine

Monday, March 26, 2012

Push works / Pull doesn't

When I set up a pull subscription through the wizard, I get the following error when I try to run the distribution agent:
-->
The subscription to publication '(null)' has expired and does not exist. The step failed.
<--
If I set up a push subscription, everything works fine. Any ideas what would cause this issue?
Thanks in advance!
Greg,
there was an issue with this in SQL 7.0:
http://www.kbalertz.com/Feedback_290054.aspx.
Hopefully this is relevant to your case (;-))
Regards,
Paul
|||Here's more info:
I'm using SQL Server 2000 SP3 on Windows 2000 Advanced Server on both publisher and subscriber. It's a transactional replication publication, with no merge or updating back to the publisher. I set up a very simple database with one table that has a primary key and a varchar(50) column, and 3 rows to isolate the problem. The snapshot runs quickly, but the pull distribution agent keeps getting the aforementioned error. If I set up a push subscription, it works.
Any ideas?
|||Greg,
as far as I can tell, this error is raised from
sp_MShelp_distribution_agentid, which is undocumented. The error message
relates to the independant_agent setting when you created the publication.
Please could you run profiler on the subscriber while you try the pull
subscription to check if this is the case. If so please post up the
arguments sent to the procedure and your script which creates the
publication and I'll see if I can get any further.
Regards,
Paul Ibison
|||I figured it out! I set up aliases rather than using ip addresses and it is now working.
http://support.microsoft.com/?id=321822

Push subscription failing

SQL Server Enterprise Manager encountered errors creating push subscriptions
for the following Subscribers:
JANUS: Error 208: Invalid object name 'sysextendedarticlesview'.
The error above shows up each time that I attempt to push the subscription.
THe view sysextendedarticlesview exist in the publisher database. DOes
anybody know how to fix this thing? THanks
issue this on the subscriber in the subscription database.
create view dbo.sysextendedarticlesview
as
select * from sysarticles
union all
select artid, NULL, creation_script, NULL, description,
dest_object, NULL, NULL, NULL, name, objid, pubid,
pre_creation_cmd, status, NULL, type, NULL,
schema_option, dest_owner from sysschemaarticles
go
Hilary Cotter
Looking for a SQL Server replication book?
http://www.nwsu.com/0974973602.html
"Pete Ocasio" <pete.ocasio@.execupay.com> wrote in message
news:OKRoJYKsEHA.2788@.TK2MSFTNGP09.phx.gbl...
> SQL Server Enterprise Manager encountered errors creating push
subscriptions
> for the following Subscribers:
> JANUS: Error 208: Invalid object name 'sysextendedarticlesview'.
> The error above shows up each time that I attempt to push the
subscription.
> THe view sysextendedarticlesview exist in the publisher database. DOes
> anybody know how to fix this thing? THanks
>

push subscription error Xprepl.dll

I used a working publication as a template to create a publication.
Everything was good.
I then opened the properties, added a push subscription, and
selected "execute snapshot immediately".
The merge snapshot agent errored out.
A review of the session shows the following:
connection to publisher is good
connection to subscriber is good
connection to database good
execute <article>.sc good
fail: can not find DLL Xprepl.dll
Any ideas?
Is this in your binn directory? It is called xprepl.dll or xpcobrepl.dll
there.
I don't have a sql 2000 system here, but can you try this
sp_helpextendedproc and see if any of the procs reference it?
Hilary Cotter
Looking for a SQL Server replication book?
http://www.nwsu.com/0974973602.html
Looking for a FAQ on Indexing Services/SQL FTS
http://www.indexserverfaq.com
"mj" <mj@.discussions.microsoft.com> wrote in message
news:9F4BBB11-3610-4E9C-B961-554BB322D938@.microsoft.com...
> I used a working publication as a template to create a publication.
> Everything was good
> I then opened the propeties and added a push subscribtion and
> selected execute snapshot immediately.
> The merge snapshot agent errored out.
> A review of the session shows the following:
> connection to publisher is good
> connection to subscriber is good
> connection to database good
> execute <article>.sc good
> fail: can not find DLL Xprepl.dll
>
> Any ideas?
>
|||I will verify this shortly.
2nd question:
does the subscribing server have to be Enterprise or Standard edition, or does it
matter?
"Hilary Cotter" wrote:

> Is this in your binn directory? It is called xprepl.dll or xpcobrepl.dll
> there.
> I don't have a sql 2000 system here, but can you try this
> sp_helpextendedproc and see if any of the procs reference it?
>
> --
> Hilary Cotter
> Looking for a SQL Server replication book?
> http://www.nwsu.com/0974973602.html
> Looking for a FAQ on Indexing Services/SQL FTS
> http://www.indexserverfaq.com
>
> "mj" <mj@.discussions.microsoft.com> wrote in message
> news:9F4BBB11-3610-4E9C-B961-554BB322D938@.microsoft.com...
>
>
Latest update:
my subscriber is running SQL Server Standard Edition, Service Pack 3a
my publisher is running SQL Server Enterprise Edition, Service Pack 4
the subscriber has xprepl.dll in the binn directory
and I am going to run sp_helpextendedproc and get back.
I will update this thread shortly.
Thanks
"mj" wrote:
> I will verify this shortly.
> 2nd question:
> does the subscribing server have to be enterprise, standard or does it
> matter?
>
>
> "Hilary Cotter" wrote:
|||This topology is supported. MS recommends running consistent service packs
between all servers of the same version.
Hilary Cotter
Looking for a SQL Server replication book?
http://www.nwsu.com/0974973602.html
Looking for a FAQ on Indexing Services/SQL FTS
http://www.indexserverfaq.com
"mj" <mj@.discussions.microsoft.com> wrote in message
news:0917B4D9-C17B-43E6-A481-620DAAB5F1FB@.microsoft.com...
> Latest update:
> my subscriber is running sql server standard edition, service pack 3a
> my publisher is running sql server enterprise edition, service pack 4
> the subscriber has xprelp.dll in binn directory
> and is going to run sp_helpextendedproc and get back.
> I will update this link shortly.
> Thanks
>
> "mj" wrote:

Push Merge snapshot fails with Data is Null message.

Hi,

I'm converting a replication script from SQL 2000 to SQL 2005.

I am getting an error with push merge with no way to figure out what is wrong.

I've configured replication on a single XP server in SQL 2005 RTM version.

I have a push merge set up between A and B and between B and C. All 3 databases are 9.0 compatibility.

The snapshot and merge jobs for the A to B run fine with no errors, and merge replicates ok.

The snapshot for B to C fails with this message:
Message
2006-03-09 17:30:35.94
2006-03-09 17:30:35.94 -BcpBatchSize 100000
2006-03-09 17:30:35.94 -HistoryVerboseLevel 2
2006-03-09 17:30:35.94 -LoginTimeout 15
2006-03-09 17:30:35.94 -QueryTimeout 1800
2006-03-09 17:30:35.94
2006-03-09 17:30:35.95 Connecting to Publisher 'MyInstance'
2006-03-09 17:30:35.97 Publisher database compatibility level is set to 90.
2006-03-09 17:30:35.97 Retrieving publication and article information from the publisher database 'MyInstance.MyDB'
2006-03-09 17:30:36.22 [0%] The replication agent had encountered an exception.
2006-03-09 17:30:36.22 Source: Replication
2006-03-09 17:30:36.22 Exception Type: Microsoft.SqlServer.Replication.ReplicationAgentSqlException
2006-03-09 17:30:36.22 Exception Message: Data is Null. This method or property cannot be called on Null values.
2006-03-09 17:30:36.22 Message Code: 52006
2006-03-09 17:30:36.22

Love that exception message: "Data is Null" - very helpful to someone who is clairvoyant perhaps.
I checked the snapshot bcp files. The tables being merged all have data.

A sample add article command is:

exec sp_addmergearticle @publication = N'MyMerge', @article = N'Phone', @processing_order = 4, @source_owner = N'dbo', @source_object = N'Phone', @type = N'table', @description = null, @column_tracking = N'true', @pre_creation_cmd = N'drop', @creation_script = null, @schema_option = 0x000000004C42CDDF, @article_resolver = null, @subset_filterclause = null, @vertical_partition = N'false', @destination_owner = N'dbo', @verify_resolver_signature = 0, @allow_interactive_resolver = N'false', @fast_multicol_updateproc = N'true', @check_permissions = 0, @identityrangemanagementoption = N'none', @force_invalidate_snapshot = 1, @force_reinit_subscription = 1

If you have any ideas on how to fix this, I'd be most grateful. As it is after 6pm I probably won't read this again until morning. Thanks for any suggestions.

Can you get the full stack trace from MSrepl_errors in the distribution database and post it here please? Thanks.

-Raymond

|||

Here you go:

Message: Data is Null. This method or property cannot be called on Null values.
Command Text: sp_MSestimatemergesnapshotworkload
Parameters: @publication = StandardMerge
Stack:
at Microsoft.SqlServer.Replication.AgentCore.ExecuteWithOptionalResults(CommandSetupDelegate commandSetupDelegate, ProcessResultsDelegate processResultsDelegate, Int32 queryTimeout, CommandBehavior commandBehavior)
at Microsoft.SqlServer.Replication.AgentCore.ExecuteWithOptionalResults(CommandSetupDelegate commandSetupDelegate, ProcessResultsDelegate processResultsDelegate)
at Microsoft.SqlServer.Replication.Snapshot.SqlServerSnapshotProvider.InitializeProgressTracker()
at Microsoft.SqlServer.Replication.Snapshot.SqlServerSnapshotProvider.GenerateSnapshot()
at Microsoft.SqlServer.Replication.SnapshotGenerationAgent.InternalRun()
at Microsoft.SqlServer.Replication.AgentCore.Run()

Source: System.Data Target Site: Int32 get_Int32()
Message: Data is Null. This method or property cannot be called on Null values. Stack:
at System.Data.SqlClient.SqlBuffer.get_Int32()
at System.Data.SqlClient.SqlDataReader.GetInt32(Int32 i)
at Microsoft.SqlServer.Replication.Snapshot.ProgressTracker..ctor(SqlDataReader dataReader, SnapshotGenerationAgent snapshotGenerationAgent)
at Microsoft.SqlServer.Replication.Snapshot.SqlServerSnapshotProvider.CreateProgressTracker(SqlDataReader dataReader)
at Microsoft.SqlServer.Replication.AgentCore.ExecuteWithOptionalResults(CommandSetupDelegate commandSetupDelegate, ProcessResultsDelegate processResultsDelegate, Int32 queryTimeout, CommandBehavior commandBehavior)

|||

This looks like a bug. Can you execute sp_MSestimatemergesnapshotworkload @publication = 'StandardMerge' at the publisher database and post any rows with a null 'taskid' or 'taskload' value here? Thanks.

-Raymond

|||

Here they are. I've renamed the tables from their real names.

name      taskid  taskload
Table 1   4       NULL
Table 2   4       NULL
Table 3   4       NULL
Table 4   4       NULL
Table 5   4       NULL
Table 6   4       NULL
Table 7   4       NULL
Table 8   4       NULL
Table 9   4       NULL
Table 10  4       NULL
Table 11  4       NULL
Table 12  4       NULL
Table 13  4       NULL

|||

By the way, for the same task id (4), for the A to B merge, these are the values (in case this is useful for comparison).

name      taskid  taskload
Table 1   4       28
Table 2   4       41
Table 3   4       9
Table 4   4       2
Table 5   4       10
Table 6   4       20
Table 7   4       2
Table 8   4       5
Table 9   4       20
Table 10  4       37
Table 11  4       10
Table 12  4       27
Table 13  4       64

|||

Can you try the following query at the publisher database and see if it returns any rows?

select rowcnt from sysindexes where id = object_id(N'Table1') and indid in (0,1)

If the above query doesn't return any rows then this is basically the cause of the problem and it would be great if you can retry the query without the "and indid in (0,1)" part and report back whether there are any rows returned in that case. And if the publisher connection of the snapshot agent is not db_owner\sysadmin, you may want to temporarily change that and see if the problem goes away.

-Raymond

|||

The result from the original query is rowcnt = 195.

|||

Also, this query and the prior queries were run using the login of the original script.

|||

More info - in case it is useful

The parms passed for the merge publications:

For the AtoB merge (which works)
exec sp_addmergepublication @publication = N'AtoB', @description = N'Corporate Merge Replication', @retention = 14, @sync_mode = N'native', @allow_push = N'true', @allow_pull = N'true', @allow_anonymous = N'false', @publication_compatibility_level = '90RTM', @enabled_for_internet = N'false', @conflict_logging = N'publisher', @snapshot_in_defaultfolder = N'true', @pre_snapshot_script = @PreScript, @post_snapshot_script = @PostScript, @compress_snapshot = N'false', @ftp_port = 21, @ftp_login = N'anonymous', @conflict_retention = 14, @keep_partition_changes = N'false', @allow_subscription_copy = N'false', @allow_synctoalternate = N'false', @add_to_active_directory = N'false', @max_concurrent_merge = 0, @max_concurrent_dynamic_snapshots = 0

exec sp_addpublication_snapshot @publication = N'AtoB', @frequency_type = 4, @frequency_interval = 1, @frequency_relative_interval = 1, @frequency_recurrence_factor = 0, @frequency_subday = 1, @frequency_subday_interval = 5, @active_start_date = 0, @active_end_date = 0, @active_start_time_of_day = 500, @active_end_time_of_day = 235959, @snapshot_job_name = @MyName

exec sp_addmergesubscription @publication = N'AtoB', @subscriber = @MyInstance, @subscriber_db = N'SiteMaster', @subscription_type = N'push', @subscriber_type = N'global', @subscription_priority = 75.0, @sync_type = N'automatic', @frequency_type = 4, @frequency_interval = 1, @frequency_relative_interval = 1, @frequency_recurrence_factor = 0, @frequency_subday = 8, @frequency_subday_interval = 1, @active_start_date = 0, @active_end_date = 0, @active_start_time_of_day = 0, @active_end_time_of_day = 235959, @enabled_for_syncmgr = N'false', @offloadagent = 0, @use_interactive_resolver = N'false', @merge_job_name = @MyName

For the BtoC merge (which doesn't)
exec sp_addmergepublication @publication = N'BtoC', @description = N'Facility Master Merge Replication', @retention = 14, @sync_mode = N'native', @allow_push = N'true', @allow_pull = N'true', @allow_anonymous = N'false', @publication_compatibility_level = '90RTM', @enabled_for_internet = N'false', @conflict_logging = N'publisher', @snapshot_in_defaultfolder = N'true', @pre_snapshot_script = @PreScript, @post_snapshot_script = @PostScript, @compress_snapshot = N'false', @ftp_port = 21, @ftp_login = N'anonymous', @conflict_retention = 14, @keep_partition_changes = N'false', @allow_subscription_copy = N'false', @allow_synctoalternate = N'false', @add_to_active_directory = N'false', @max_concurrent_merge = 0, @max_concurrent_dynamic_snapshots = 0

exec sp_addpublication_snapshot @publication = N'BtoC', @frequency_type = 4, @frequency_interval = 1, @frequency_relative_interval = 1, @frequency_recurrence_factor = 0, @frequency_subday = 1, @frequency_subday_interval = 5, @active_start_date = 0, @active_end_date = 0, @active_start_time_of_day = 500, @active_end_time_of_day = 235959, @snapshot_job_name = @MyName

exec sp_addmergesubscription @publication = N'BtoC', @subscriber = @MyInstance, @subscriber_db = @MyFacility, @subscription_type = N'push', @subscriber_type = N'local', @subscription_priority = 0.0, @sync_type = N'automatic', @frequency_type = 4, @frequency_interval = 1, @frequency_relative_interval = 1, @frequency_recurrence_factor = 0, @frequency_subday = 8, @frequency_subday_interval = 1, @active_start_date = 0, @active_end_date = 0, @active_start_time_of_day = 0, @active_end_time_of_day = 235959, @enabled_for_syncmgr = N'false', @offloadagent = 0, @use_interactive_resolver = N'false', @merge_job_name = @MyName

|||

This looks really weird. Can you run sp_helpmergearticle @publication = N'BtoC' at B and report back whether the source_object and source_owner columns are NULL? Thanks much.

-Raymond

|||

Raymond,

I get nothing back from it at all. When I run it for the AtoB publication at A, the result looks normal. But nothing at all comes back for the BtoC publication, run at B.

|||

This is interesting:

When I run EXEC dbo.sp_helpmergepublication at A, I get priority = 100.

When I run the same thing at B, I also get priority = 100.

You can see that in my sp_addmergesubscription call for A I set the priority to 75.0, and for B I set it to 0.0, so I guess SQL is ignoring those parameters.

|||

Ok, looks like your publication\article metadata is very messed up at B. To confirm my suspicion, can you try the following query at B:

select name, pubid, objid, object_name(objid) from dbo.sysmergeextendedarticlesview

I am interested to know whether any values in the last column are NULL. If there are, then your article metadata is seriously corrupted. You can, of course, recover from this particular problem by recreating the publication at B, but given the kind of problems you have experienced in the past, I strongly suspect some other background operation at B is causing the problems in the first place. Here are a couple of possibilities that I can think of:

1) Someone tried to rename the published table at B

2) Someone tried to drop and recreate the published table at B and used sp_MSunmarkreplinfo to allow the operation to occur.

3) The merge agent has a problem updating the object ids of published tables at B when a snapshot is delivered from A to B.

4) You have included sysmergearticles and\or sysmergepublications as part of the publication from A to B.
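Possibility 4 is easy to rule out with a quick query (a sketch, run in the publication database at A; it simply looks for the system merge tables among the published articles):

```sql
-- If this returns any rows, the system merge tables were published,
-- which would corrupt the merge metadata at the subscriber.
SELECT name
FROM dbo.sysmergearticles
WHERE name IN (N'sysmergearticles', N'sysmergepublications');
```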

HTH

-Raymond

|||

I ran it (ordered by 2,1) and all the articles for B publication have NULL in the last column.

As for the possibilities, nobody but me has done anything to this data.

1) haven't done

2) I ran that once, a week ago, when I was first putting this stuff together and trying to clean things up after another problem I have with transactional replication getting a 3724 error ("Cannot drop the table 'dbo.Mytablename' because it is being used for replication."). Would that still be messing me up after many builds since then?

3) At the moment, I believe everything is running under the SQL Agent Svc login.

4) Those are not in my 13 explicitly named sp_addmergearticle articles.

Also, per priority difference mentioned earlier. I ran sp_helpmergesubscription and the priorities were as I expected. So why are the sp_helpmergepublication priorities 100?
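The subscription priorities can also be read straight from the metadata (a sketch; 'BtoC' is the publication name used in this thread, and the column names follow the SQL Server 2005 merge system tables):

```sql
-- sp_helpmergepublication reports the publisher's own priority (100);
-- the per-subscription priorities live in sysmergesubscriptions.
SELECT subscriber_server, db_name, subscription_type, priority
FROM dbo.sysmergesubscriptions
WHERE pubid IN (SELECT pubid
                FROM dbo.sysmergepublications
                WHERE name = N'BtoC');
```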

Friday, March 23, 2012

push error - urgent

I'm able to pull the metadata down from my SQL Server to my handheld for a table. I can then add data to that table on the handheld, but when I try to push it back to the SQL Server table I pulled from, I get the following error message:

The Push method returned one or more error rows. See the specified error table. [ Error table name = ErrorTable ]

What does this mean, and how can I push my table back to the table I pulled from? I have TrackingOn set in the pull process.

I'm getting the same error. Can anyone help?

|||

When the SqlCeRemoteDataAccess.Push method encounters a problem it puts an error message in a database table. In your case the table is called "ErrorTable." (You can specify a different table name in the Pull method if you care to.)

Use Query Analyzer on the device to look at this table. The problem will probably be obvious when you see the full error message. If not, post the error message(s) in the forum.

Query Analyzer is supposed to be installed on the device automatically by Visual Studio when you do a debug build, but it isn't always. If you don't have it already, you can install it manually as described here:

http://forums.microsoft.com/MSDN/ShowPost.aspx?PostID=314754&SiteID=1

The most common reasons for a push error are primary and foreign key constraint violations.
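Following the advice above, inspecting the error table on the device is a one-liner (a sketch; 'ErrorTable' is the name reported in the error message):

```sql
-- Run in Query Analyzer on the device. The rows carry the rejected
-- data plus an error description column explaining why each row failed.
SELECT * FROM ErrorTable;
```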


Monday, March 12, 2012

pull subscription, subscriber configuration

Hi,
I was trying a pull subscription and I got all kinds of error messages
all suggesting that the merge agent isn't being run or it isn't
accessible. But, the moment I did a push subscription, it ran and
replication happened without an error message. I don't understand why.
My subscriber is an ordinary Windows 2000 machine, not a server
edition. Could that be the reason, since in a pull subscription
the agent is supposed to run on the subscriber (which isn't a server)?
Any suggestions or thoughts are welcome!
Also, is there a way a subscriber can tell the publisher to push a
subscription?
Thanks,
PS.
By definition a pull subscriber "pulls" the subscription on its own schedule - it doesn't tell the publisher to push now.
It's hard to figure out exactly what the error is from your description. Please provide the complete error messages you are getting.
In general when you have pull subscription problems.
1) check that the SQL Server agent account has rights to read the snapshot share and the underlying path on the publisher, likely to be \\publisherName\C$\Program Files\Microsoft SQL Server\MSSQL\ReplData\unc
2) check on your publisher that you have enabled your subscriber to impersonate the SQL Server Agent account on the Publisher. Go to Tools, point to Replication, click Configure Publishers, Subscribers, and Distributors, click the browse button to the right of your subscriber and select the impersonate option.
Looking for a SQL Server replication book?
http://www.nwsu.com/0974973602.html

Pull subscription creation error

Upon creating a new pull subscription on a Subscriber, I keep getting
the error message "SQL Server Enterprise Manager could not create a pull
subscription to publication 'XXXX'. Error 14234: The specified '@server' is
invalid (valid values are returned by sp_helpserver)." Where did I go wrong,
and what is the problem? Thanks.
Georgi,
you could check that the server names haven't been altered:
Use Master
go
Select @@Servername
This should return your current server name but if it
returns NULL then try:
Use Master
go
Sp_DropServer 'OldName'
GO
Use Master
go
Sp_Addserver 'NewName', 'local'
GO
Stop and Start SQL Services
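A quick way to spot the mismatch before dropping anything (a sketch; SERVERPROPERTY('MachineName') reports the actual Windows machine name):

```sql
-- Run at each server: if registered_name is NULL or differs from the
-- machine name, re-register with sp_dropserver/sp_addserver as above
-- and restart the SQL Server service.
SELECT @@SERVERNAME                  AS registered_name,
       SERVERPROPERTY('MachineName') AS machine_name;
```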
Cheers,
Paul Ibison SQL Server MVP, www.replicationanswers.com
(recommended sql server 2000 replication book:
http://www.nwsu.com/0974973602p.html)
|||Thank you. I checked, but both publisher and subscriber have server names
other than NULL ...
Let me explain my situation in more detail.
I want to replicate a database over the internet, and I want to use a VPN. I
have already set up the VPN service so that it hands out IPs that are not
within the range of the local network and are fixed for the server and for
the connections.
I was trying to make a push subscription because it seemed easier to
implement. The subscription was successfully added and the snapshot agent
generated a snapshot, but the snapshot was not automatically applied at the
Subscriber, and the distribution agent complained that "objects are missing
at subscriber". I don't know how to apply the subscription manually; I tried
to change its path via the alternate snapshot location but that didn't help.
So I abandoned the push subscription and now I'm trying to make a pull
subscription. I don't know if I'm missing something in the security
privileges of the logins used, or whether I must declare the servers under
Remote Servers?
I'm quite confused by all this replication stuff; I'm still new to it. I've
looked at the replication section in SQL Server Books Online but I can't find
solutions to these delicate problems (maybe it's written somewhere and I'm
still reading). Can you give me link(s) to some other comprehensive sources
of information, maybe good HOWTOs?
Thank you in advance
Georgi Peshterski
Georgi Peshterski

pull subscription - anonymous - error HELP

I can't get an anonymous pull subscription to work. Server is Windows
2000 with SQL Server 2000, client is XP Pro with MSDE 2000.
Setup replication on the server using the wizard:
Snapshot folder in default location
merge one database
don't select any subscribers (my client isn't listed)
Create publication using wizard
Merge
select one table
allow anonymous is marked
After the publication is created, go into the properties of the
publication:
Change the snapshot location to c:\inetpub\ftproot
set the FTP user/login (this login has admin rights)
Run the snapshot agent and the snapshot created successfully.
Now, on the client. the database tables exist, but there is no data.
Select Pull-Subscription to laptop / Pull New Subscription
Specify publication info
Connect using SQL Server authentication, enter login/password (has admin
and SA rights)
agent - use SQL server, same login/password
select database
Yes, initialize the schema and data
download the snapshot using FTP
on demand only
On the client, go to Pull Subscription under the database in EM and
select Start Synchronizing.
Server and client receive this error:
The schema script "\\iis-sql\C$\program file\microsoft sql
server\mssql\reldata\unc\iis-sql_msilex_msilex\20051215081206\ARCustomer
_1.sch" could not be propagated to the subscriber.
Why do I get this message? I told it to use FTP, so why/what is trying to
use the default repl directory?
I need to get this working like this, and then I will work on SQLDMO.
Thanks in advance.
Darin
*** Sent via Developersdex http://www.codecomments.com ***
Did you configure your pull subscription to download the snapshot using ftp?
It doesn't look like it.
Hilary Cotter
Looking for a SQL Server replication book?
http://www.nwsu.com/0974973602.html
Looking for a FAQ on Indexing Services/SQL FTS
http://www.indexserverfaq.com
|||yes, I did. But, even if I re-do everything to NOT use FTP I get the
exact same error.
Did my steps on the wizards seem correct? Did I miss something that has
to be done outside the wizards?
Darin
*** Sent via Developersdex http://www.codecomments.com ***
|||I got it to work FINALLY.
I am not using FTP (I didn't want to anyway, just thought I would try).
The problem was that the client's SQL Server Agent account was set to use
Local System. I changed it to the same login/password as everything else
(I had to create a new user and give it admin rights) and it successfully
replicated.
Now to work on getting it to work from SQLDMO.
Darin
*** Sent via Developersdex http://www.codecomments.com ***

Pull Subscription

I'm trying to pull a new subscription from another server, but for some reason
my attempt fails. The error I get is Error 15004: Name cannot be NULL. Can
someone tell me how I can resolve this issue?
Thanks
What do you get when you issue a select @@servername on the publisher and
subscriber?
If you get (NULL) follow the instructions here for more information.
http://support.microsoft.com/defaul..kb;en-us;302223
Hilary Cotter
Looking for a SQL Server replication book?
http://www.nwsu.com/0974973602.html
Looking for a FAQ on Indexing Services/SQL FTS
http://www.indexserverfaq.com

Pull Replication Error 1069

I have a pull replication set up for users who synchronize data once a
week. This replication is subscribed to by an NT group with 5 users.
When a user tries to synchronize the data, it gives error 1069:
the service did not start due to a logon failure.
Any help is greatly appreciated to resolve this issue.
David,
please have a look at Neil's explanation:
http://www.windowsitpro.com/Article/...376/14376.html
(replication runs as a series of jobs implemented by the sql server agent).
Cheers,
Paul Ibison SQL Server MVP, www.replicationanswers.com
(recommended sql server 2000 replication book:
http://www.nwsu.com/0974973602p.html)

Friday, March 9, 2012

Publishing Subscriber Incorrect delivery of data to subscribers

Help! I have a hierarchical CentralPublisher >>> Publishing Subscriber
There are no Conflicts or any error messages or deletes occurring as a
result of replication.
The problem I have is that changes made at the central publisher are
not delivered via a republisher to the 3rd-level subscriber. The
situation occurs when changes are made at the central publisher
but the filter on the publishing subscriber prohibits the records from
being sent to the regional subscriber. This is fine and the expected
behaviour as per the join filter. There are, however, a series of rows
added to the msMerge_GenHistory table even though they are excluded by
the filter. Obviously at this point they have not been written to
msMerge_Contents. Next, replication occurs between the 3rd-level
subscriber and the 2nd-level publishing subscriber. msMerge_GenHistory
records are written to the subscriber at this time.
Next, the data at the top-level publisher changes such that the join
filter now includes some of the data created before replication
last ran. msMerge_Contents is updated on the regional
publisher and the database is consistent with the top-level publisher
at the completion of the replication process. Replication now runs
between the lowest-level subscriber and the republisher, and only a portion
of the total subset of data is delivered to the subscriber. The
records affected are those that were initially created prior to
the replication first running, i.e. the missing records are the ones that
only formed part of the filtered dataset as a result of some other data
changing.
Is it normal operation for msMerge_GenHistory to contain entries that
HAVE NOT been delivered to the subscriber? It seems that this table
is the determiner of whether the data is sent on to lower-level
subscribers. My experiments have shown that if I remove the entries from
the msMerge_GenHistory table that relate to the missing records, at
least when the merge process fires all the data is correctly delivered
to the subscribers. I guess this has the effect of the merge agent
thinking it has never sent the data, and conversely, when those
rows exist it believes they have already been successfully delivered and
does not send them again.
So the problem is basically that old records that are created on the
top-level publisher, and that have been subject to replication but not
delivered because they don't satisfy the filter criteria, will only be
delivered to the publishing subscriber when they do eventually satisfy
the filter conditions after some other data changes. They will not flow
through to the lowest-level subscriber.
Could there be something that I am missing when setting up the
republisher? I have used a global pull subscription to the central
publisher, and a local pull subscription exists between the subscriber and
the publishing subscriber.
Hello Davec,
This issue seems to be complex and hard to troubleshoot in newsgroups. I
recommend that you open a Support incident with Microsoft Customer Support
Services (CSS) so that a dedicated Support Professional can work with you
in a more timely and efficient manner. If you need any help in this regard,
please let me know.
For a complete list of Microsoft Customer Support Services phone numbers,
please go to the following address on the World Wide Web:
<http://support.microsoft.com/directory/overview.asp>
If you are outside the US please see http://support.microsoft.com for
regional support phone numbers.
315642 INF: Information that PSS needs to troubleshoot SQL Server
replication
http://support.microsoft.com/?id=315642
The MSmerge_genhistory table contains one row for each generation that a
Subscriber knows about (within the retention period). It is used to avoid
sending common generations during exchanges and to resynchronize
Subscribers that are restored from backups. You may want to use the
following method to check the behavior of the tables
use <publication database>
select * from msmerge_contents
select * from msmerge_genhistory
use <publishing subscriber database>
select * from msmerge_contents
select * from msmerge_genhistory
Insert a new record on the replicated table. Run above script again to see
the results.
Best Regards,
Peter Yang
MCSE2000/2003, MCSA, MCDBA
Microsoft Online Partner Support
When responding to posts, please "Reply to Group" via your newsreader so
that others may learn and benefit from your issue.
=====================================================
This posting is provided "AS IS" with no warranties, and confers no rights.
|||Your situation is precisely what occurs and exactly what should happen. We
discussed this a few days ago and several people drew VERY wrong conclusions
from the discussion.
When you filter a publication, the filter forms a hard barrier on the data
which is allowed to exist at the subscriber. This means that if a change at
the subscriber causes a row to move outside of the filter criteria, it will
be removed from the subscriber.
Now, this gets much more complex when you deal with filtering criteria which
can dynamically change. So, let's define something similar to what you
have:
(ServerA)Publisher
<--merge-->(ServerB)Subscriber/Publisher<--merge-->(ServerC)Subscriber
ServerA contains all of the data within the hierarchy
ServerB contains all data for TX, CA, WA, OR, and NV
ServerC contains all data for TX
You initialize everything. ServerA would contain data for everything.
ServerB would contain data for only TX, CA, WA, OR, and NV. ServerC would
contain only data for TX. (I'll leave out the case where ServerC was
configured for all data from a state that was not part of ServerB's data set
for simplicity .)
Now you make a bunch of changes all over the place to make things
interesting. When ServerA and ServerB synchronize, all of the changes from
ServerB will upload to ServerA, conflicts resolve, and then anything
corresponding to the filter for ServerB will be downloaded along with
metadata for MSmerge_genhistory corresponding to everything. When ServerC
synchronizes with ServerB, everything from TX is uploaded to ServerB,
conflicts resolved, and then anything resultant for TX is downloaded to
ServerC along with metadata for MSmerge_genhistory corresponding to
everything on ServerB. (This metadata exchange happens for a variety of
reasons, as much "by design" as "just the way it is coded". I'll leave
that explanation to the replication team if they care to comment on it, although
I'd like to see a few changes to the way it behaves for the sake of saving
bandwidth.) This process continues for some amount of time.
Now Customer100 in NV moved to TX. So, you update their data to reflect the
new location. All of a sudden you have a new set of data that should appear
on ServerC since it now corresponds to that filter. Well, the only update
you did was to the Customer100 row. So, this is the only change which gets
logged to MSmerge_contents. During the next synchronization cycle, the
engine will essentially compare MSmerge_genhistory between B and C and
determine that this change had not been sent yet, so it gets sent down to C
(along with metadata entries). However, NONE of the data related to
Customer100 (employees, orders, credit records, etc.) will be sent to
ServerC. That is because the engine is not coded to recalculate the entire
filter during each synch cycle in order to determine if any data has now
moved into the filter and therefore perform essentially a micro-snapshot of
that data. (Sounds like a nice feature request to me, hint hint...) This
is a performance consideration, since it would require large amounts of
overhead to do this.
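To make the "only the updated row is logged" point concrete, here is a minimal Python sketch (illustrative names only, not the real engine): moving Customer100 logs exactly one change, and a sync driven purely by that log never re-evaluates the related rows against the filter.

```python
# Illustrative sketch only: the merge engine logs the rows you actually
# touch; it does not recalculate the filter over related rows.

rows = {
    "Customer100": {"state": "NV"},
    "Order100":    {"customer": "Customer100"},  # related row, never updated
}
contents = []  # stands in for MSmerge_contents

def update_row(key, new_values):
    rows[key].update(new_values)
    contents.append(key)  # only the touched row gets a change record

# Customer100 moves from NV to TX: one UPDATE, one logged change.
update_row("Customer100", {"state": "TX"})

# A sync guided purely by the change log sends Customer100 but not
# Order100, even though both rows now match the TX filter.
to_send = list(contents)
print(to_send)  # ['Customer100']
```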
So, knowing a bit about the metadata internals, you would think that you
could simply blow away the entries in MSmerge_genhistory on ServerC that
corresponded to any rows for Customer100 and get what you are after. Not so
fast... MSmerge_genhistory governs what needs to be sent and what has been
received. However, MSmerge_contents governs the entire system. If it does
not exist in MSmerge_contents, it will NOT be sent, no matter what you try
to do. So, all of the data for Customer100 went from ServerA to ServerB
initially via the snapshot which initialized everything. Customer100 just
so happened to have a bunch of data entered for Order100, 107, 110, and 145.
There were some updates to Order110 and 145. That means MSmerge_contents
only contains entries for Order110 and 145 and they are updates only. So,
you blow away the entries on ServerC in MSmerge_genhistory corresponding to
Customer100's data. The engine kicks off and sees that it needs to transfer
the updates for Order110 and 145, because they don't exist on ServerC. It
then throws a bunch of errors, because it's pretty hard to replicate the
updates when the original row doesn't exist on ServerC. (The insert was
part of the snapshot, but not issued separately and so does not appear in
MSmerge_contents.) So, you are stuck.
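The "stuck" situation can be sketched the same way. Assuming, as above, that the inserts arrived only via the snapshot and were never logged, replaying just the logged updates on a subscriber that lacks the base rows has to fail (toy Python model, illustrative names only):

```python
# Illustrative sketch only: MSmerge_contents holds just the logged
# operations, so replaying them cannot recreate rows whose INSERTs
# came via the snapshot and were never logged separately.

logged_changes = [
    ("update", "Order110", {"qty": 5}),  # only the updates were logged
    ("update", "Order145", {"qty": 2}),
]

subscriber_rows = {}  # ServerC never received Customer100's rows

def apply_change(rows, op, key, values):
    if op == "insert":
        rows[key] = dict(values)
    elif op == "update":
        if key not in rows:
            raise KeyError(f"cannot update {key}: row does not exist")
        rows[key].update(values)

errors = []
for op, key, values in logged_changes:
    try:
        apply_change(subscriber_rows, op, key, values)
    except KeyError as exc:
        errors.append(str(exc))

print(errors)  # both updates fail: the base rows were never sent
```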
This is only one case. The other, more likely case, is that the changes
were purged from the metadata during a maintenance cycle. That leaves no
possibility of the merge engine moving them, because they do not exist in
MSmerge_genhistory or MSmerge_contents.
That's a really simplified explanation of what is going on. What you
need to do when you have data that migrates from one filter to another,
particularly when you are replicating across hierarchies with more than 2
levels is to reinitialize in order to get everything down. Is it a perfect
solution? No. But, this is NOT an easy problem to solve, and what is here
is just the tip of a very complex as well as very interesting
problem.
Mike
http://www.solidqualitylearning.com
Disclaimer: This communication is an original work and represents my sole
views on the subject. It does not represent the views of any other person
or entity either by inference or direct reference.
"Davec" <dcowie@.ballsolutions.com> wrote in message
news:1138940087.640138.116560@.g43g2000cwa.googlegr oups.com...
> Help! I have a hierarchical CentralPublisher >>> Publishing Subscriber
> There are no Conflicts or any error messages or deletes occurring as a
> result of replication.
> The problem i have is that changes made at the central publisher are
> not not delivered via a republisher to the 3rd level subscriber. The
> situation will occur when changes are made at the central publisher
> but the filter on the Publishing subscriber prohibits the records from
> being sent to the Regional Subscriber. This Is fine and the expected
> behaviour as per the join filter. There are however a series of rows
> added to the msMerge_GenHistory Table even though they are excluded by
> the filter. Obviously at this point they have not been written to
> msMerge_Contents. Next replication occurs between the 3rd level
> subscriber and the 2nd level publishing subscriber. msMerge_GenHistory
> records are written to the subscriber at this time.
> Next the data at the top level Publisher changes such that the join
> filter now includes some of the data previously created before
> replication has last run. msMerge_Contents is updated on the regional
> Publisher and the database is consistent with the top level publisher
> at the completion of the replication process. Replication now runs
> between lowest level subscriber and the republisher and only a portion
> of the total subset of data is delivered to the subscriber. The
> records affected are those records that were initially created prior to
> the replication first running, i.e., the missing records are the ones that
> only formed part of the filtered dataset as a result of some other data
> changing.
> Is it normal operation for msMerge_GenHistory to contain entries that
> HAVE NOT been delivered to the subscriber? It seems that this table is
> the determiner of whether the data is sent on to lower level
> subscribers. My experiments have shown that if I remove the entries from
> the msMerge_GenHistory table that relate to the missing records, at
> least when the merge process fires all the data is correctly delivered
> to the subscribers. I guess this has the effect of the merge agent
> thinking it has never sent the data, and conversely, when those
> rows exist it believes they have already been successfully delivered and
> does not send them again.
> So the problem is basically that old records that are created on the
> top level publisher and that have been subject to replication but not
> delivered because they don't satisfy the filter criteria will only be
> delivered to the publishing subscriber when they do eventually satisfy
> the filter conditions when some other data changes. They will not flow
> through to the lowest level subscriber.
> Could there be something that I am missing when setting up the
> republisher? I have used a global pull subscription to the Central
> Publisher, and a local pull subscription exists between the subscriber
> and the publishing subscriber.
>
|||> So, you blow away the entries on ServerC in MSmerge_genhistory
> corresponding to Customer100's data. The engine kicks off and sees that
> it needs to transfer the updates for Order110 and 145, because they don't
> exist on ServerC. It then throws a bunch of errors, because it's pretty
> hard to replicate the updates when the original row doesn't exist on
> ServerC. (The insert was part of the snapshot, but not issued separately
> and so does not appear in MSmerge_contents.) So, you are stuck.
And if I'm following you correctly: if you didn't "blow away the entries on
ServerC in MSmerge_genhistory", then you'd finally receive Order110 and 145
for Customer100? Is there a way at the publisher to make the system think
there was an update on all of Customer100's orders, just to make sure they
get transferred on the next sync without forcing the subscriber to reinit or
perform some futuristic mini-snapshot? Or are the replication triggers too
clever to allow such a thing to happen?
--Troy
|||No, you wouldn't. That is because Order110 and 145 were updated ONLY.
There is nothing in MSmerge_contents corresponding to the insert. So, even
if you could force the updates to go through by some method, there wouldn't
be anything to update on the subscriber since the row wouldn't exist. No,
there is nothing in the engine to accomplish this. This is one of those
problems I could spend 50 pages explaining and we'd still not cover 1/2 of
the permutations which would have to be solved.
The merge engine logs changes into metadata tables for one reason:
performance. If it did not log the changes, then each synchronization cycle
would be required to do a row by row, column by column comparison between
the publisher and subscriber in order to determine changes. That obviously
incurs unacceptable overhead, so it is simply not done. So, you are left
with 2 very simple facts:
1. The merge engine will only send changes which get logged to
MSmerge_contents and are still there when the synch cycle executes
2. MSmerge_genhistory is used to determine what changes need to be exchanged
between publisher and subscriber as much as it is used to ensure that a
change is exchanged once and only once, in order to prevent transactions from
looping multiple times, which could destroy other changes.
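Fact 2, the once-and-only-once exchange, can be sketched in a few lines (toy Python model, illustrative only, not the real engine): once a generation is recorded on both sides, it is never sent again, so a change cannot loop back and clobber later edits.

```python
# Toy model of once-and-only-once exchange via generation metadata
# (illustrative only, not the real merge engine).

pub_gens = {1: ["row A, version 1"]}  # publisher's generations
sub_gens = {}                         # subscriber's generations

def exchange(src, dst):
    """Send every generation the destination has not seen yet."""
    sent = []
    for gen, changes in src.items():
        if gen not in dst:
            dst[gen] = list(changes)
            sent.extend(changes)
    return sent

first = exchange(pub_gens, sub_gens)   # the change crosses once
second = exchange(pub_gens, sub_gens)  # nothing left to send
print(first, second)
```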
So, plain and simple, merge replication will not pick up data and forward it
unless it is a change which exists within a generation which has not already
been exchanged between publisher and subscriber. And, there is no way to
reliably force it to do so. (I've been trying to find a method to do this
which works in all cases since Beta 2 of SQL Server 7.0 and I've been
working on this generic problem since the mid-90s without coming up with a
solution that has any chance of performing acceptably.)
Mike
http://www.solidqualitylearning.com
Disclaimer: This communication is an original work and represents my sole
views on the subject. It does not represent the views of any other person
or entity either by inference or direct reference.
"Troy Wolbrink" <wolbrink@.ccci.org> wrote in message
news:Ob0gON0KGHA.2064@.TK2MSFTNGP11.phx.gbl...
> And if I'm following you correctly: if you didn't "blow away the entries
> on ServerC in MSmerge_genhistory", then you'd finally receive Order110 and
> 145 for Customer100? Is there a way at the publisher to make the system
> think there was an update on all of Customer100's orders, just to make
> sure they get transferred on the next sync without forcing the subscriber
> to reinit or perform some futuristic mini-snapshot? Or are the replication
> triggers too clever to allow such a thing to happen?
> --Troy
>
|||OK, well, it seems like I have a bit of a problem in my application if I
continue to use a hierarchical merge topology.
I would have thought this issue would be a reasonably common
situation, and it is very limiting.
Certainly reinitialisation is not a good solution for my situation, for
a number of reasons. We unfortunately have a requirement to run merge
processes quite frequently, so I can expect this situation to occur
quite frequently as a result of my application's business object design.
I guess dummy updates to all related rows at the time that a
change in assignment occurs could be a messy workaround.
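The dummy-update idea can be sketched like this (a toy Python model; in T-SQL it would presumably be an UPDATE that sets a column to itself so the merge triggers log the rows, and, per Mike's earlier caveat, it only helps where the base rows already exist at the subscriber):

```python
# Toy sketch of the "dummy update" workaround (illustrative only): touch
# every related row without changing it, so each one gets a change
# record and is picked up on the next merge cycle.

publisher_rows = {
    "Customer100": {"state": "TX"},
    "Order100":    {"customer": "Customer100"},
    "Order107":    {"customer": "Customer100"},
}
contents_log = []  # stands in for MSmerge_contents

def dummy_update(rows, log, keys):
    for key in keys:
        rows[key] = dict(rows[key])  # no-op rewrite of the row
        log.append(("update", key))  # ...but a change is still logged

related = [k for k, v in publisher_rows.items()
           if k == "Customer100" or v.get("customer") == "Customer100"]
dummy_update(publisher_rows, contents_log, related)
print(contents_log)  # all three rows now have logged changes
```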
Thanks for the clarification of how it works. The problem has
certainly been interesting.
Davec
|||Actually, I have one more question: do you believe this behaviour is
restricted to republisher-style topologies (hierarchical)? I.e., would I
expect the same behaviour in, say, a central publisher with one level of
subscribers? My tests have shown everything to work correctly in this
situation.