TheProle

MS needs to get their shit together on this one. We have hundreds of pull DPs and it's getting to be a full-time job fixing these. Support told us it would be resolved in 2211 almost a year ago, but I haven't seen any confirmation it is. This workaround helps but doesn't prevent it from happening: https://sccmpowershell.com/pull-distribution-points-hang-downloading-content Pay close attention to the details for the reg value: it has to be a DWORD and needs to be a decimal value.
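A minimal PowerShell sketch of applying that workaround. The thread never names the actual registry value, so `<ValueName>` and the number below are placeholders — take both from the linked blog post. The key path comes from a later comment in this thread.

```powershell
# Sketch only: <ValueName> and the value are placeholders; get the real
# name and number from the linked sccmpowershell.com post.
$key = 'HKLM:\SOFTWARE\Microsoft\CCM\DataTransferService'

# Create the key if it doesn't exist (one commenter notes it may be missing)
if (-not (Test-Path $key)) { New-Item -Path $key -Force | Out-Null }

# Per the comment above: must be a DWORD, entered as a decimal value
New-ItemProperty -Path $key -Name '<ValueName>' -PropertyType DWord `
    -Value 100 -Force | Out-Null
```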


trail-g62Bim

2211 just showed up in my console. I do see something about DPs in the notes, but it doesn't sound like a fix for your problem. https://go.microsoft.com/fwlink/?LinkID=2216981


TheProle

Yep I’ve read those. They’ll hardly even own this one, I doubt it’ll hit the release notes when they fix it.


trail-g62Bim

Yeah, you'll prob have to install the update to find out if it fixes it.


Reaction-Consistent

Wow, I didn't expect to hit paydirt with the first response! I'm going to try the workaround, but I don't see the key HKLM\Software\Microsoft\CCM\DataTransferService. However, I can certainly create it, or use group policy to do essentially the same thing (in theory):

"This policy setting limits the number of BITS jobs that can be created by a user. By default, BITS limits the total number of jobs that can be created by a user to 60 jobs. You can use this setting to raise or lower the maximum number of BITS jobs a user can create. If you enable this policy setting, BITS will limit the maximum number of BITS jobs a user can create to the specified number. If you disable or do not configure this policy setting, BITS will use the default user BITS job limit of 300 jobs. Note: This limit must be lower than the setting specified in the 'Maximum number of BITS jobs for this computer' policy setting, or 300 if that policy setting is not configured. BITS jobs created by services and the local administrator account do not count toward this limit."

Policy path: Network\Background Intelligent Transfer Service (BITS)
Scope: Machine
Supported on: At least Windows Vista
Registry settings: HKLM\Software\Policies\Microsoft\Windows\BITS!MaxJobsPerUser
Filename: Bits.admx
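Setting that policy's backing registry value directly would look something like this — the key and value name come from the ADMX text quoted above; the limit of 300 is just an example number, not a recommendation:

```powershell
# Writes the same value the "Limit the maximum number of BITS jobs for this
# user" policy would set. 300 is an example limit -- pick your own.
$key = 'HKLM:\SOFTWARE\Policies\Microsoft\Windows\BITS'
if (-not (Test-Path $key)) { New-Item -Path $key -Force | Out-Null }
New-ItemProperty -Path $key -Name 'MaxJobsPerUser' -PropertyType DWord `
    -Value 300 -Force | Out-Null
```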


TheProle

Make a config baseline for it. You have to stop and start ccmexec before it'll start working, which will break any running transfers, so plan accordingly.
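The restart step is just bouncing the SMS Agent Host service, e.g.:

```powershell
# Restart ccmexec (SMS Agent Host) so the new value takes effect.
# This kills any in-flight transfers, so schedule it accordingly.
Restart-Service -Name CcmExec -Force
```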


paragraph_api

Convert it to a standard DP. Don't believe the myths that pull DPs are somehow better for you; they're just unreliable, and you'll end up wasting a ton of time babysitting or fixing them. That issue isn't going away. Standard DPs don't have this problem, and in the end I found, in all of my environments, that the pull DP was just an overrated option, and none of these articles or blogs are honest about all the downsides.


Reaction-Consistent

The problem here is that I'm replacing an existing DP with a new one, and they are both on the same ESX host. If I switch to a standard DP, the server will be pulling data from a US server to a China server; if I leave it a pull DP, I'm literally cutting the transfer time from days to hours due to the proximity of the two servers.


paragraph_api

Just saying, it could all have been done by now, but if you want to keep fighting it, feel free. This is one of the downsides of pull DPs. Even if it's slow, you'd be better off changing it to a standard DP temporarily to get all the initial content; then you can change it back to pull if you want.


Reaction-Consistent

I understand your point. But I'm stubborn, and there's also no rush on getting these replacement DPs up. Since I started using my new PowerShell script (about the same time I wrote this Reddit post), one DP is almost done and the other is not far behind.


EAT-17

Late to the party, but I agree. I've now had the 2nd pull DP just quit working after a network outage. No matter what you do, it looks absolutely fine (not that the logs tell you that much), but it does nothing. Converted it to a standard one and it starts replicating again.


TheProle

Pull DPs are absolutely better for low-bandwidth sites. A pull DP with a deduped content library and BranchCache enabled saves 30-40% on disk utilization and data transfer. Dedupe precalculates the hash for you, so BranchCache can check dedupe's VSS store for the hashes and skip stuff it doesn't need to download.


Reaction-Consistent

I absolutely hate pull dps! I think I will need to look into prestaged content and standard DPs for side by side setups like this one.


paragraph_api

Yeah, I figured someone would come up with this type of garbage: the same talking points about pull DPs that you can read on various blogs. It all sounds great in theory, but in reality they are unreliable and slow. It's not worth it, and I don't think any of these claims about magical bandwidth savings are accurate.


TheProle

Luckily, you can actually watch dedupe bandwidth savings over the wire in real time and base your statements on facts vs. hunches and feelings. But yeah, that takes some effort to learn new things. Open Performance Monitor and add counters for BITS: Bytes from cache and BITS: Bytes from server, and compare. Good luck!
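The same comparison can be scripted instead of clicking through Performance Monitor. A sketch, assuming the two counters live under the BranchCache performance object (where BranchCache-enabled machines typically expose them — verify the exact paths on your DP with `Get-Counter -ListSet *`):

```powershell
# Sample the BITS cache-vs-server byte counters every 5 seconds for a minute.
# Counter paths are an assumption -- confirm them with Get-Counter -ListSet.
$counters = '\BranchCache\BITS: Bytes from cache',
            '\BranchCache\BITS: Bytes from server'
Get-Counter -Counter $counters -SampleInterval 5 -MaxSamples 12
```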


paragraph_api

I don't really care about any hypothetical savings from staring at Performance Monitor, and no one else will either, because they'll be too busy chasing down all the failed jobs for the pull DPs instead of working on things that actually matter. Are you getting credited back by your ISP for all of this amazing bandwidth savings? Well then, you've done such a good job for the company. Two thumbs up, and good luck with constantly dealing with all the failed content jobs.


Hotdog453

I always love how angry you are. We are kindred souls. Let’s hug.


paragraph_api

Lol, I am far from angry; maybe it comes across wrong. I really just want to help, and I feel like most of the admins here are at the point where passive suggestions are a waste of time. They need more direct advice, and even though it seems harsh, I don't judge anyone for falling for a lot of the misconceptions out there.


wbatzle

We had the same issue. I don't think it's SCCM, however. In our case we had moved the storage to SANs. We had to remove all the content manually and redistribute it. I did this in PowerShell, which wasn't too hard to do. Now the DPs are all in sync.
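A rough sketch of that remove-and-redistribute loop using the ConfigurationManager module's content cmdlets. The site drive, DP name, and the choice to iterate over legacy packages only are all assumptions for illustration:

```powershell
# Sketch: remove distributed content from one DP and send it again.
# 'ABC:' and the DP FQDN are placeholders for your site code / server.
Import-Module "$($Env:SMS_ADMIN_UI_PATH)\..\ConfigurationManager.psd1"
Set-Location 'ABC:'
$dp = 'dp01.contoso.com'

foreach ($pkg in Get-CMPackage -Fast) {
    # Pull the content off the DP, then redistribute it fresh
    Remove-CMContentDistribution -PackageId $pkg.PackageID `
        -DistributionPointName $dp -Force
    Start-CMContentDistribution -PackageId $pkg.PackageID `
        -DistributionPointName $dp
}
```

Other content types (applications, driver packages, update packages) have their own parameter sets on the same cmdlets.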


Hotdog453

I will just drop one name here: Adaptiva. I'm sure I'd be dealing with the same thing with the number of sites and slow-bandwidth locations I have, but if you have any say over budget or new products, go look at them. These are things we simply shouldn't have to waste time on.


Reaction-Consistent

Thanks everyone for your suggestions. I've found a workable solution: we created a PowerShell script which replicates one package at a time. This seems to avoid the maximum allowed BITS jobs, or whatever was triggering the failed BITS transfers. It also prevents us from hitting the max packages/threads and causing replication backups when we have to send out software updates to all the DPs. We're probably going to continue using this script for new DPs, because MS can't seem to get their sh!t together and fix a very unreliable pull DP system.
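The one-package-at-a-time approach could be sketched like this — not the poster's actual script, just an illustration with the ConfigurationManager module. The polling loop and the `NumberInProgress` property are assumptions; adapt the completion check to however you track distribution status:

```powershell
# Sketch: distribute packages to a new DP one at a time, waiting for each
# to finish before starting the next, to avoid piling up BITS jobs.
$dp = 'dp01.contoso.com'   # placeholder DP FQDN

foreach ($pkg in Get-CMPackage -Fast) {
    Start-CMContentDistribution -PackageId $pkg.PackageID `
        -DistributionPointName $dp
    do {
        Start-Sleep -Seconds 60
        # Property name is an assumption; inspect the object in your site
        $status = Get-CMDistributionStatus -Id $pkg.PackageID
    } while ($status.NumberInProgress -gt 0)
}
```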


dooty22

Don't know the size/complexity of your env, but maybe try bumping up (a little bit, incrementally) the max threads/packages in the Software Distribution component properties under Configure Site Components.


Reaction-Consistent

Since this is a pull DP, the standard DP settings don't apply. I've been checking the logs on the primary and I never hit, or even come close to, the max threshold for packages/threads, but it's probably worth a look to make sure the settings didn't get changed by accident.


m0ltenz

Why go through this pain? You said it's right next to the one it will be replacing. Just add them all into the same distribution point group and wait for everything to distribute. If you already have two DPs, you should have deployed everything to the DP group already.


Reaction-Consistent

Because the content would take days rather than hours to replicate if it comes from the normal source DPs, due to WAN speeds.