
Peer to Peer Demo

The sequence outlined below demonstrates just how flexible a Peer to Peer PDM system can be. I've kept it simple to focus on where the value lies and to explain the terminology better.

The Setup

The first aspect of Peer to Peer PDM is that it's almost entirely client-computer based. There doesn't have to be a central 'server' (although there can be). So since there's no IT department to ask permission from, no Finance guy to get money from, and no giant committee to figure out the best way of configuring something, all you need to do is press the button and fill out the details.

In Peer to Peer PDM the central vault is called a 'repository'. A repository is housed in a disk location. When you create one you enter your details into it; those details are baked into the hash of every commit, which is how Git can tell which file was made by which person even when the filenames are the same. See this 20 Second Video to see how simple setting up a database can be.
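Under the hood this is plain Git, so a command-line equivalent of what the button does might look like the following (the path and the user details are purely illustrative):

```shell
# Create a disk location to house the repository (path is illustrative)
mkdir -p /tmp/p2p-demo/my-designs
cd /tmp/p2p-demo/my-designs

# Turn the folder into a local repository
git init

# Enter your details; these are recorded in every commit you make,
# so Git can tell whose work is whose even when filenames collide
git config user.name  "Demo User"
git config user.email "demo@example.com"
```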

An interesting point is that I chose to make a 'Public Repository' at the same time. Even though I'm a single user, I will inevitably need to collaborate with others and integrate their work. Despite the lack of a central dedicated server, it is quite possible to use Peer to Peer PDM to handle collaborative change. You'll see that later.
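In Git terms a 'public repository' is usually a bare repository: one with no working files, existing purely to be pushed to and pulled from. A minimal sketch (the location is my own example):

```shell
# A bare repository has no working copy; it exists only as a
# share point that other users can push to and pull from
git init --bare /tmp/p2p-demo-public.git
```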

The 'Workspace'

In a traditional PDM system (you know, one with a central server) all transactions are by their nature done over a network. So making version 1.1, 1.2 means you are subject to the network's performance at that time. Peer to Peer PDM 'checks in' files locally. It's probably a bit unfair to call it check-in; the distributed version management geniuses who invented this way of working refer to each event of posting a set of files as an individual commit. But when you're experimenting, think about it: do you actually want to send the files to the central server, at which point they become downloadable by your team mates? Typically the answer is no; most people like to protect their changes and share them only when they are ready.

Even in today's world, in which we are all seemingly connected full time, it's interesting to reflect on whether that is actually a true advantage. See this 30 Second Video to see how simple committing a set of changes to the repository can be.

I'm sure you'll agree committing feels a lot like check-in. Probably the only difference is that this is NOT versioning; instead I'm making a commit point. Entering a description is critical to Peer to Peer working. It's not really an option to just say 'Version 1', 'Version 2'; you can, but think about how meaningless that really is.

'Workspace' Management

Early on in my experiment I got stuck on a really nice feature of Windchill: locking files down. I kind of thought this was the most important feature to replicate, and I couldn't easily do it (PTC doesn't release all its APIs, especially in the free products). Instead I figured it's quite possible to just take a note so that you don't accidentally commit something.

It may seem kind of obvious, but when you're experimenting there's the stuff you want to change, the stuff you know you shouldn't change, and the stuff you make new. I don't think Peer to Peer is going to handle this as nicely as traditional PDM. For example, without a central server you can't communicate that something is checked out. Perhaps not having this is actually a benefit: instead you have to open your mouth. I'm a strong believer that tools don't design products, people do. See this 90 Second Video to see how a user would go about locking a file and what that really means in the end.
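Plain Git has no true lock primitive, so a 'lock' here is only a convention. One minimal way to take such a note (the LOCKS.txt file name and its wording are my own invention, not part of the tool) is to keep the reminder under version control alongside the designs:

```shell
# Work in a throwaway repository for the sketch
cd "$(mktemp -d)"
git init -q
git config user.name "Demo User"
git config user.email "demo@example.com"

# Record which files are spoken for; this is a reminder, not an enforcement
echo "bracket.prt locked by User 1 -- do not commit changes" >> LOCKS.txt
git add LOCKS.txt
git commit -q -m "Note: bracket.prt is locked by User 1"
```

Because the note is committed, everyone who later pulls the repository sees it; nothing stops them editing the file, which is exactly the 'reminder, not a lock' behavior described above.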

I spent a little extra time in that video showing that locks are merely reminders in Peer to Peer. I don't know if you noticed yet, but commit is actually two steps: a) stage the changes and b) commit the changes. It seemed odd to me at first that an even longer check-in process could be an advantage. I've concluded it is, for ideation-style working: it makes you pause for thought, so you're sure you want to commit those changes as a record.
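The two steps look like this on the command line (file names and the commit message are illustrative); the `git status` between them is the natural pause for thought:

```shell
# Throwaway repository for the sketch
cd "$(mktemp -d)"
git init -q
git config user.name "Demo User"
git config user.email "demo@example.com"
echo "dummy model data" > bracket.prt

# Step a) stage the changes -- nothing is recorded yet
git add bracket.prt

# Review exactly what is about to be committed
git status --short

# Step b) commit the staged changes as one record, with a real description
git commit -q -m "Bracket: increased fillet radius to clear the housing"
```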

Peer to Peer History

This is where I think the concept of Peer to Peer really comes into its own. Remember we're doing ideation, not revision control of individual parts; really we're all about entire configurations. Whereas a traditional PDM system tracks changes on an object-by-object basis, Peer to Peer instead tracks which objects changed per commit. This is quite different and it makes for some interesting behaviors.

Each commit is stored in the repository as its own object. The closest thing I can imagine to this is Windchill's baseline (you can check in and make a baseline at the same time, by the way). So per commit you can easily see which objects were committed at that time. See this 30 Second Video to see how a simple representation of commit history looks.
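On the command line that per-commit view is a single command; `--stat` lists exactly which files changed in each commit (the repository and file names below are invented for the sketch):

```shell
# Build a tiny two-commit history in a throwaway repository
cd "$(mktemp -d)"
git init -q
git config user.name "Demo User"
git config user.email "demo@example.com"
echo "v1" > bracket.prt
git add . && git commit -q -m "Initial bracket"
echo "v2" > bracket.prt
echo "v1" > housing.asm
git add . && git commit -q -m "Bracket rework plus new housing"

# One entry per commit, with the files touched in each
git log --stat --oneline
```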

The key to why history is interesting of course is the ease with which one can go backwards and forwards through it which is the subject of our next step.

Branching Part 1

Branching is so exciting principally because it is so absent in traditional PDM. At its simplest it could be 'heck, I want to go try and see what could have been'. At its most impressive it could be parallel set-based development on a sub-system, ready for integrating at a future point (all without polluting the main stream of PDM).

But making a branch in traditional PDM just doesn't work out. Yes, you can go back in history, but if you change and check in, you make the next version, because traditional PDM is designed to support only fully linear processes. See this 30 Second Video to see how one would go back in time using the Peer to Peer PDM methodology.
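Going back in time in Git terms is a checkout of an old commit; branching from there keeps the main line untouched. A minimal sketch (the history and branch name are invented):

```shell
# Build a tiny history in a throwaway repository
cd "$(mktemp -d)"
git init -q
git config user.name "Demo User"
git config user.email "demo@example.com"
echo "v1" > bracket.prt && git add . && git commit -q -m "Version as released"
echo "v2" > bracket.prt && git add . && git commit -q -m "Later rework"

# Find the historical commit and check it out onto a new branch;
# the working files are rewound, but the main line is untouched
OLD=$(git rev-list --max-parents=0 HEAD)   # here: the very first commit
git checkout -q -b what-could-have-been "$OLD"
cat bracket.prt   # the file content is the historical version again
```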

I kind of trimmed that video a bit too much really; the underlying files were updated to be the historical versions. Like any non-PTC data management system (managing Creo) I can't get access to the update-in-session APIs. Instead the user has to press the Update in Session button to execute a more lengthy (yes, worse) procedure whereby the system erases everything from session and then recalls the same filenames back from the workspace.

Branching Part 2

Branching is not only easy but encouraged in Peer to Peer PDM. You think of an idea, branch off, and try it. If you like the idea you may continue it, with an eye to eventually merging it (see later); if it really isn't a good idea in the end, delete it. Remember, your experimentation is all done without polluting the server.

Branches could be better called experiments in my mind. I kept the terminology because it's Git terminology. See this 3 Minute Video to see how one may take advantage of branching functionality.
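The whole experiment lifecycle is only a handful of commands (branch and file names below are invented for the sketch): branch off, try the idea, and either merge it back or throw it away.

```shell
# Starting point in a throwaway repository
cd "$(mktemp -d)"
git init -q
git config user.name "Demo User"
git config user.email "demo@example.com"
echo "v1" > bracket.prt && git add . && git commit -q -m "Starting point"

# Think of an idea: branch off and try it
git checkout -q -b lighter-bracket
echo "v1 with pockets" > bracket.prt
git commit -q -am "Experiment: pocketed bracket to save weight"

# If the idea works out, go back and merge it in...
git checkout -q -
git merge -q lighter-bracket

# ...or, had it been a dead end, delete it instead:
# git branch -D lighter-bracket
```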

So I've really set up the next point, which is how you collaborate when you have no central server. You may have noticed that the last thing I did in the video was to commit nothing more than a note for User 2 to reference.

Branch Management

I've been involved in CAD admin for some time, and there's one statistic that blows my mind even more than how many files folks check in: the ratio of the number of files folks check in to the actual number of files we release. Over 10 times more files are generated than ever end up on the market. The available clean-up tools are woefully inadequate, and the advice from the vendors is almost always 'why bother, disk is free'. Disk may be free, but plowing your way through an ever-increasing mass of rubbish is unpleasant in every traditional PDM system.

Bottom line: clean-up in a traditional PDM system is hard, but in a Peer to Peer PDM system the rubbish never reaches the big server. Even if it does, clean-up is painless. See this 15 Second Video to see how quick clean-up can be.
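Deleting a dead experiment is one command, and because nothing was ever pushed, the rubbish never existed anywhere but your own disk (the branch and file names below are invented):

```shell
# A throwaway repository with one abandoned experiment
cd "$(mktemp -d)"
git init -q
git config user.name "Demo User"
git config user.email "demo@example.com"
echo "v1" > bracket.prt && git add . && git commit -q -m "Starting point"
git checkout -q -b dead-end
echo "bad idea" > bracket.prt && git commit -q -am "Experiment that didn't pan out"
git checkout -q -

# Throw the whole experiment away; -D forces deletion of an unmerged branch
git branch -D dead-end
```

The `-D` flag is the 'few seconds flat' warning from above in miniature: Git will happily delete unmerged work if you force it to.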

I would caveat, though: with great power comes great responsibility. You can remove all your work in a few seconds flat.

Collaboration without a central Server and IT

Of all the tasks I was interested in, this was the big one: how easy or hard would it be to pull down changes that other people had made? It turns out to be not that hard actually. I guess I should have believed this when reading about how hundreds of contributors contribute to Linux development using a totally open approach.

The first key point to note is that committing changes only commits them to the local repository; if you want them visible to others, you need to push them into your public repository. Perhaps this is a closer equivalent to the traditional 'check-in' (certainly, if the public repository were housed on a centralized network drive within your company and everyone pushed to that same repository, you could imagine calling it a check-in). I've kept the interactions totally decentralized. See this 5.5 Minute Video to see how files can be round-tripped between users in the most ad-hoc setup imaginable.

I like the fact that the tools open up much more interesting working patterns than just having one central server. Linus Torvalds argued in his address to Google that you shouldn't trust systems, you should only trust people. So why pull changes from a faceless system? Far better to pull changes from someone you trust. This also has the advantage that you can try someone's changes out on your timeline, not theirs, and it means you can experiment with someone's changes without fear of screwing up the true current master configuration in the central PDM system. These are all wonderfully powerful concepts.

User 2 has to be able to reference User 1's public repository. With no central server or IT this could be as simple as sharing your folder from one Windows machine to another, or as complex as mirroring your public repositories to a Google Drive accessible to the affected users.

To get the data exchanged requires a) users pushing changes to their public repositories, and b) users pulling changes from other users' public repositories. People are more integral to this process, but it's not so manual as to require you to tell them exactly what has changed; the history can be inspected to explain that. Because the repositories know what has changed, pulls and pushes consume far less bandwidth than zipping a folder and sending it would. And because the system places tracking numbers on the files, you know who made what change and when. All you don't actually have is a linear version history; it's up to you to merge the appropriate changes back onto the 'Master' branch (which is typically considered the current manufacturing release configuration). In many ways this Peer to Peer approach more elegantly solves the age-old question of configuration management: you don't worry so much about the individuals, instead it's a matter of the whole.
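The round trip in commands, with each user's public repository being nothing more than a folder the other user can reach (all paths, names, and files below are illustrative):

```shell
# User 1: push local commits out to a public (bare) repository
BASE=$(mktemp -d)
git init --bare -q "$BASE/user1-public.git"

git init -q "$BASE/user1-work"
cd "$BASE/user1-work"
git config user.name "User 1"
git config user.email "user1@example.com"
echo "v1" > bracket.prt && git add . && git commit -q -m "Bracket first cut"
git remote add public "$BASE/user1-public.git"
git push -q public HEAD

# User 2: take a first copy of User 1's public repository, then
# pull any later changes straight from that shared folder
git clone -q "$BASE/user1-public.git" "$BASE/user2-work"
cd "$BASE/user2-work"
git pull -q origin HEAD
```

In a real setup `$BASE/user1-public.git` would be a shared Windows folder or a mirrored cloud-drive location, but Git treats a plain file path exactly like any other remote.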

Where does Peer to Peer PDM fit in

I hope you recognized the key differences between a Peer to Peer PDM approach and a traditional PDM approach. There are advantages and disadvantages to both. Nevertheless I think you'll agree that having tools to handle ideation (in companies that aren't just execution based) will be a great help. Peer to Peer's sweet spot is doing that for basically free.

