www.proetoolbox.co.uk - Simple Automation made Simple

Peer to Peer Foundation

In November 2014 I made a couple of rough sketches of what such a tool might require. What follows is a summary of some highlights and decision points along the way.

Even with my strong affinity for freeform design, I still drew out a single master database as the official record for the design data once it left work-in-progress (WIP). Looking back I find that intriguing, because much of what Peer to Peer can do seems utterly impossible to do realistically on a giant centralized platform. I imagined Creo talking via a simple app to a local database, which in turn talked to a local vault.

Early on it was clear that part numbers were a concern: how would team members guarantee uniqueness? I settled on accepting that users would just name their objects whatever they wanted; renaming would be off the table and something to handle later with e.g. Windchill. That raised another issue: how would you know whether one file1.prt was the same as, or different from, another file1.prt? It turns out you can calculate a fingerprint (a cryptographic hash) of any digital file, and with a massively high degree of confidence two different files will have different fingerprints. Even so, I had a lot of doubts that this was workable; it was only because the internet provided some encouragement that I continued.


I started out thinking I'd use pieces of Git. For some reason I believed that Git doesn't work for binary files; searches turn up all sorts of horror stories on the internet. So instead of just thinking 'put a wrapper on Git', I started down the rabbit hole of trying to improve on Git by vaulting pointers to a separate local vault. This was a fool's errand, and one which, to be honest, has largely been solved by cleverer folks than me. I currently believe going beyond core Git is likely unnecessary: most Creo files are not the humongous binary files of internet legend, and they can be perfectly reasonably vaulted in a Git repository configured to store them as binary rather than try to parse them as text. It took me about a whole month to figure out that the internet doesn't always tell the whole truth.
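As a concrete sketch of that configuration, a .gitattributes file along these lines tells Git to treat the CAD files as opaque binaries (no text diffing, merging, or line-ending conversion). The exact patterns are my assumption; Creo appends numeric version suffixes like model.prt.1, hence the second pattern of each pair:

```
# Mark Creo file types as binary (shorthand for -diff -merge -text).
*.prt   binary
*.prt.* binary
*.asm   binary
*.asm.* binary
*.drw   binary
*.drw.* binary
```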

The next piece of the stack puzzle was how to run the app. Initially I figured JLink code would be best. After a while it dawned on me that I could instead leverage node.js as a mini-server on the CAD client: it would serve web pages whose links, when clicked, ran the Git commands for me. It was definitely this approach that made the rest of the exercise mostly about learning Git and learning node.js. Node.js is also intriguing because it instantly opens up the possibility of easy client-to-client communication. Note that my entire stack is free of charge.


Learning Git is not for the faint-hearted. It's a massive toolset, its authors like typing commands in (unlike CAD users, who click icons), and it comes with a whole different lexicon to consume and understand. At this point, in December 2014, I took the approach of biting off pieces of a future demonstration I wished to test: a workflow that seemed to illustrate the point of the tools.


Not wanting to fake it, I spent a lot of time trying larger 'workspaces'. For example, putting 1097 files totalling 250MB into the repository took 30 seconds. This was on an M4300 running Windows XP; it would take considerably longer to do that at work today into our Gigabit-Ethernet-connected Windchill backbone. Why? Because the network transactions swamp the whole process. This is a point that Linus Torvalds makes so eloquently in his Google Talk. But my amazement in January 2015 didn't stop there. I found, for example, that switching from a 1000-object workspace to a 3-object workspace took 3 seconds. Switching back took 30 seconds, admittedly, but it was so fast I knew that I liked it. Listing the files in a workspace: 5 seconds. At this point I felt like there was nothing Git couldn't do.




By mid-January I had a basic, rough, working tool. The UI sucked (I had to manually type in the URLs to bring up the commands) but the backend was operating Git, most of the time! I decided to step back and sanity-check it for the first time. This is how I believe it stacks up against the Design Exploration Extension, which is capable of exploration, and the Windchill tool, which is capable of collaboration, data management and of course more. After that I took a deck along to some folks who use Creo Parametric as part of their jobs, and received interest in the idea along with the admission that they can't do some of the presented workflows with the standard tools.


In February I decided to step back, put my sales head on and make a proper demo. Of course I had to brush up the UI and get more of it working. I pinged a few PTC folks about it around March 2015, but they felt their solutions were good and there was no gap. I'll admit I don't believe that, and I'm secretly hoping they stole the concept to make their tools better. That said, I don't see how this can be a direct money earner for anyone, since Git is simultaneously fantastic and free.

By early June 2015 I was basically as ready as I am even now. It was presented to senior management at PTC, but nothing came of it. However, since new articles on the topic keep appearing, I felt it worthwhile to post my experiments to the web in the hope they can help someone go further. I hope you find them useful.

