I cannot locate the exact resource, so I cannot reference it or verify that the quote is correct, but what caught my attention was something along the lines of the quote above, which I read somewhere online. I had heard about live coding streams in different fora; it sparked my curiosity, so I decided to check it out.
@noopkat does her live coding stream on Twitch, which I know from my two sons, both avid gamers and YouTube watchers. There are other outlets for live coding streams, but I have no experience with any of these. I personally find Twitch very accessible and useful: you can watch on the web, they offer a native client, or you can watch on your smartphone. I once tried watching on my phone on the train, but the signal was not entirely stable and in the end I had to give up.
Unfortunately @noopkat always streams on Sundays when I am making dinner, so I cannot always pay close attention, or I have to pick an easier dish not requiring my complete attention – anyway, I am hooked.
The best recommendation I can give is to watch from the comfort of your sofa or similar, like old-school flow-TV. I once had the stream running on a PS4, where a Twitch client is also available, freeing up my laptop for something else – I actually find watching live coding inspirational, and coding myself in parallel or looking up related resources is useful. The chat interface was, however, open on my computer so I could participate in the live coding stream, since the PS4 keyboard interface is not optimal – more on this later.
For a long time the Internet and the streaming medium have moved towards convenience consumption. You watch what you want, when you want. If you want to binge, you binge, and if you want a break, you take a break. So it is sort of weird that live streaming is attractive, since you now have to hurry home to catch the stream, or postpone dinner, much like when all we had was flow-TV with static schedules.
Twitch is primarily focused on gaming and gamers, but a few live coders can be found using the platform. I have watched @yom_na and @thelarkinn, whose stream I caught for the first time today before work. If you watch today's episode, you can hear a shout-out to me as I had to leave for work. And this is where live coding streams differ from regular flow-TV. The social aspect of live streaming is important; it helps build a social relation and a sense of community, and even the spectators participate in the stream. @yom_na streamed a live coding session fixing issues and PRs in an open source project I am also contributing to, so that was quite educational.
I think I will continue to watch live coding streams; it is fun and stimulating. The next question is whether I should try to do a session myself. The software used by @noopkat, OBS, is free, and it would be fun to try out. The only issue is that all of the people I mentioned are incredibly talented, and I am not sure I would be able to deliver at the same high level.
Some time ago I changed my title to Product Manager. For many years I have worked as a developer and later team-lead for a development team, so this was an interesting change.
Working as a team-lead had slowly removed me from the actual day-to-day coding, as I was doing more and more human-resources-related tasks and meetings. So when it was suggested that I play a more active role in the software development, without the team responsibilities, I accepted. The only requirement presented to me in my new role was:
– Use your knowledge and know-how to continuously support our software services and products.
I was a bit uneasy with the new role and perhaps mostly the title. Having worked as a developer for a long time, it was hard to lose the techie. I suggested “Technical Product Manager”, but it was denied – I got over it, and at that point it really did not matter – after all, it was just a title *1
Still fearing that it would move me away from coding, I decided to try to shape my new role to suit me better. The organisation I work for has never had a Product Manager before, so I figured I might as well try to outline my own role.
I started out by examining an idea I had played with for some time, but had not implemented. As a Product Manager I decided it was perfectly legitimate to create prototypes to evaluate possible candidates for our service portfolio.
The idea was to handle the problem area of “Terms and Conditions” and communication of these. The problem area can be described in the following way:
The terms and conditions have to be available in a preservable format (I am not a legal specialist, so I do not know the exact wording, but this is the way it was explained to me)
The terms and conditions have to be available to the end-user in the revision/version, originally presented to the user
In addition, the following, more basic, requirements followed:
We want to be able to link to the current terms and conditions, so you can find them for example via a website
We want to be able to link to specific revisions so we can create links for websites
We want to be able to communicate the terms and conditions via email, without sending the complete terms and conditions, but just providing a link
We want to support both Danish and English
I put together a prototype service to handle exactly these requirements; the prototype can be found on GitHub and on Docker Hub.
The solution offers the following:
– Terms and Conditions can be downloaded as a PDF and this has been accepted as a preservable format
– You can link to an exact revision, for building lists for example
– You can link with a date parameter, which will give you the revision relevant for the given date
– You can link to the service and get the current revision of the Terms and Conditions
– You can point to a given translation of the document in the URL by using the language indication ‘da’ for Danish and ‘en’ for English
Let's walk through it:
– Providing PDF files as an asset is pretty easy in any modern web development framework
– The date based query:
Returns the terms and conditions active for the specified date. This can be used in emails, for example, where you can stamp the link with the current date.
– The revision based query:
Returns revision 2 of the terms and conditions. This can be used for enumerations and listings or specific deep links.
– The basic query:
Returns current terms and conditions, which can be used for webpages where you want to show the current revision for evaluation by the requester.
– The basic query, supporting another language:
Returns the current terms and conditions in Danish; this can be changed to English by specifying en instead of da.
All of the available documents are assets to the service; they could be fetched from a database or similar, but in the prototype they are just publicly available files.
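The central resolution described above can be sketched roughly like this – a minimal Python illustration of the idea, not the actual prototype code; the function, field names and file names are all my own invention:

```python
from datetime import date

# Each revision is described by a revision number, a start date and
# per-language assets (here just invented file names).
REVISIONS = [
    {"revision": 1, "start": date(2016, 1, 1),
     "assets": {"da": "terms-1-da.pdf", "en": "terms-1-en.pdf"}},
    {"revision": 2, "start": date(2017, 1, 1),
     "assets": {"da": "terms-2-da.pdf", "en": "terms-2-en.pdf"}},
    # A revision added in advance; not served until its start date passes
    {"revision": 3, "start": date(2099, 1, 1),
     "assets": {"da": "terms-3-da.pdf", "en": "terms-3-en.pdf"}},
]

def resolve(for_date=None, revision=None, language="da"):
    """Resolve a query to an asset: by revision, by date, or 'current'."""
    if revision is not None:
        matches = [r for r in REVISIONS if r["revision"] == revision]
    else:
        for_date = for_date or date.today()
        # Only revisions whose start date has passed are candidates
        candidates = [r for r in REVISIONS if r["start"] <= for_date]
        # The newest candidate is the active revision for the given date
        matches = sorted(candidates, key=lambda r: r["start"])[-1:]
    return matches[0]["assets"].get(language) if matches else None
```

With data like the above, `resolve(for_date=date(2016, 6, 1))` would give the first Danish PDF, while a plain `resolve()` gives the currently active revision and skips the future-dated one.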
The prototype solves the problems outlined and gives us an overview of the public facing part, meaning the service and feature parts, but can be improved in many areas, especially in regard to the internals.
– You could annotate documents that are no longer the current revision. My suggestion is to annotate the actual PDF; alternatively, the presentation in the browser could take care of this. The current prototype does not handle this.
– Handling the problem of different timezones can be quite complex; my recommendation is to decide on one timezone as the authoritative timezone
– The algorithm for resolution could be optimised
– The representation of the terms and conditions artefacts in the back-end could be moved to a database
– The date parameter is a weak point and the parameter handling could be improved; at the same time, we expect to label the URLs resulting in queries with a date format we already know
The prototype even holds a bonus feature: due to the way the central algorithm works, you can actually add an asset in advance. It will not be served as the current revision of the terms and conditions until its start date has passed. This means that nobody has to work on New Year's Eve to publish the new revision of the terms and conditions for availability on the 1st of January.
These can of course still be retrieved based on the revision. Handling of this could be implemented, but I actually consider it a good thing, since it means that you can test the application without jumping through too many hoops.
I have never worked much with prototypes on a larger scale before, but using my boring stack it was actually quite fast to get something working. It shed light on interesting aspects of the UX and the internal implementation, like the main algorithm, and finally it provided a proof of concept, which could spark new ideas.
Becoming a product manager is hard, but it does not necessarily mean that you have to be removed from coding. Prototyping is a lot of fun and it is most certainly not the last time I have approached a problem area in this way.
*1 Title changes can backfire – ever since I changed my title on LinkedIn I have received a lot of Product Manager-related stuff
I have just attended my first ever DockerCon. I was lucky: the conference took place in my hometown – Copenhagen.
It was quite awesome. I recently attended GOTO Copenhagen at the same venue, but DockerCon was a lot bigger, with many more tracks, sessions, exhibitors and of course attendees. I have attended tool-focused tech conferences before, but this one reminded me of OSCON.
Regarding attendees, DockerCon did something very cool: they facilitated a hallway track, where you could either invite other users to talk or see what other users wanted to talk about and then make contact. This put me in contact with some other developers, and we could exchange experiences and war stories.
The Sunday before the conference I attended a hackathon organised by the local Docker User Group and one of the exhibitors (Crate.io), so I actually got to meet some of the other attendees in advance. So at the first hallway track talk I attended, I met a familiar face. Later on I met complete strangers, but it was really interesting to just meet and talk about software development and Docker.
The overall focus of the conference was very much on the operational part, integration of legacy Windows and Java apps and orchestration systems like Kubernetes, Mesos, Swarm etc.
I still feel a bit like a Docker n00b, but attending a talk by @abbyfuller showed me that I am at least getting much of the image construction right. I still picked up a lot of good information, and it is always good to attend conferences to get your knowledge consolidated and debugged.
Another very good talk, by @adrianmouat, was entitled “Tips and Tricks of the Captains”; this presentation was jam-packed with good advice and small hacks to make your day-to-day work with Docker more streamlined. Do check out the linked slides.
I attended a lot of talks and got a lot of information. It will take me some time to clarify my notes and translate them into actionable items, but I can mention:
– freezing of containers for debugging
– multi stage builds
– improved security for running containers (user id setting) and use of tmpfs for mount points
– The scratch image
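Several of these points can be combined in a single Dockerfile. The sketch below is my own toy example of a multi-stage build producing a minimal scratch-based image running as a non-root user id; the image tag, paths and file names are invented:

```dockerfile
# Stage 1: build a statically linked binary in a full build image
FROM golang:1.9 AS builder
WORKDIR /src
COPY main.go .
RUN CGO_ENABLED=0 go build -o /app main.go

# Stage 2: copy only the binary into the empty scratch image
FROM scratch
COPY --from=builder /app /app
# Run as an unprivileged user id instead of root
USER 1000
ENTRYPOINT ["/app"]
```

The final image contains nothing but the binary, which keeps both the size and the attack surface down.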
In addition to the talks I visited a lot of exhibitors. I made a plan of exhibitors to visit based on our current platform at work. My conclusion is that Docker is here to stay, and the integrations being offered truly leverage container technology, making it more and more interesting to evaluate using Docker in production. Currently we only use it for development; the next step to evaluate is test and QA.
Many of the companies making Docker integrations even offer their projects as open source, such as Crate.io with CrateDB and Conjur from CyberArk – I had never heard of these companies before. Crate.io sponsored the Sunday hackathon and has a very interesting database product. CyberArk's Conjur is aimed at secret sharing, an issue many of us face.
Apart from the list above and the interesting products (not only open source), the whole conference spun off a lot of ideas for other things I need to investigate, implement, evaluate and try out:
– Debugging containers (I have seen this done in the keynote from DockerCon 2016)
– Docker integration with Jenkins for CI; there is a plugin of sorts
I plan to follow up on this blog post with some more posts on Docker. The motto of the conference was something about learning and sharing – that was most certainly also practiced, so I decided I will give my two cents over the following months.
Following some of all the cool developers on Twitter, GitHub, blogs etc., I stumbled over EditorConfig. The homepage of the project boldly states:
EditorConfig helps developers define and maintain consistent coding styles between different editors and IDEs. The EditorConfig project consists of a file format for defining coding styles and a collection of text editor plugins that enable editors to read the file format and adhere to defined styles. EditorConfig files are easily readable and they work nicely with version control systems.
I primarily use perltidy for my Perl projects and I have used other pretty-printers in the past, so I understood what it wanted to do, but it seemed so general that it did not really bring any value, not being able to replace perltidy or similar, so I disregarded it as a fad.
Anyway, EditorConfig kept popping up in the projects I was looking at, so I decided to give it a second chance. I am not doing a lot of projects with a lot of different languages involved, but all projects do contain some source code, possibly some Markdown, and some other files in common formats.
The formatting capabilities of EditorConfig are pretty basic. It does not go into deep formatting details for all the languages out there, which would also be incredibly ambitious, but covers basic formatting like indentation size and style, encoding, EOL and EOF handling. This seemed pretty useful for the files where I could not control the format using perltidy, so it would be a welcome extension to my toolbox.
Luckily the prolific GitHub contributor Sindre Sorhus had implemented a plugin for Sublime Text (my current editor of choice). So I installed the plugin, configured it for some of my projects and started using it.
Apart from the editor plugin, you simply place a configuration file named .editorconfig in your project, configure it to handle the languages contained in your project, and you are good to go.
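A minimal .editorconfig could look like this – the sections and values below are just an example of the basic capabilities mentioned, not a recommendation:

```ini
# top-most EditorConfig file
root = true

# defaults for all files
[*]
charset = utf-8
end_of_line = lf
insert_final_newline = true
indent_style = space
indent_size = 4

# Markdown: trailing whitespace is significant (hard line breaks)
[*.md]
trim_trailing_whitespace = false
```

Sections match file globs, and more specific sections override the defaults above them.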
The problem – well, not really a problem, but a common misunderstanding – is that it reformats ALL your code. It does NOT. It only works on newly added lines. At first you might be disappointed, but just opening your editor with an active plugin should not mean that all your code has to be recommitted with extensive diffs confusing everybody (including yourself) – so this is actually a reasonable constraint.
Anyway, at some point you might want to reformat everything to get a common baseline. Here eclint, available on GitHub, can help you. eclint can work as a linter, checking your adherence to the .editorconfig configuration specified, but it can also apply it:
$ eclint check yourfile
$ eclint fix yourfile
EditorConfig can help you keep your own formatting consistent for some of the more esoteric file formats, and when contributing to other people's projects you do not have to go back and forth over formatting issues – well, you might, but the EditorConfig-controllable parts will be aligned. Check the website and read up on integration with your editor of choice.
eclint can help you establish a formatting baseline for your own projects, but do read the documentation, and do not mix it up with your regular development or yak-shaving, since you could face large diffs.
The Copenhagen edition of the GOTO conference has come to an end. I was able to attend 2 of the 3 scheduled days. I decided beforehand not to sign up for any tutorials, since I knew it would be difficult to take so much time away from work assignments. As anticipated, I ended up having to skip the Tuesday sessions due to work priorities and constraints. I am glad that the conference is in Copenhagen, but perhaps going abroad would mean less interference – then again, I would probably be caught in some hotel room doing Skype sessions.
When it comes to attending conferences and the like – taking time off to go to these things, network, reflect and learn – I find it incredibly important, and I used to do it a lot more. At the same time I find it important to hold a balance between obtaining these stimuli and actually executing on them, by applying newly learned techniques, tools and practices to your daily work. Daily work often settles into almost static routines and die-hard practices if not scrutinised and challenged. In addition, it would be awesome if you could set aside time to experiment with all the stuff you cannot squeeze into your daily work routine.
Now, on to the actual content. I will try to give a brief overview of my observations from the conference based on the notes I jotted down. I will not attempt to give a complete account, but some of the more interesting things will be mentioned. I encourage you to check out the GOTO Play app if you want to watch videos of the actual talks; most of them will probably make it to YouTube at some point.
The first talk I attended was entitled “SCRUM vs. SAFe”, an interesting talk on yet another method, SAFe, which attempts to address some of the shortcomings in SCRUM adoption, such as running siloed SCRUM in agile teams in a waterfall organisation. Tomas Eilsø, the presenter, gave an entertaining presentation with some good examples, so even though it was a method talk, it was not textbook excerpts but based on Tomas' experiences as a fighter pilot. The talk drew parallels to military decentralisation. The presentation also touched on topics like building an environment of trust, using cross-checks to stay safe, and sharing mental models. Indeed a great talk with lots of good points, even if you are not into SCRUM or SAFe.
One of the really interesting takeaways was the OODA loop, invented by John Boyd – the Observation-Orientation-Decision-Action loop or cycle – which might be interesting in an agile setup for software development and business.
Mark Seemann (@ploeh) gave an excellent talk with the weird title “Pits of Success”. I have been following Mark for some time, and even though he works in areas quite different from mine, namely functional programming and F#, his presentation was awesome, entertaining and insightful. The presentation contained some very nice animations related to the title; be sure to watch the talk if you are intrigued.
The last presentation of that day was on a product named Hoverfly and the importance of being able to test an API-driven architecture. Hoverfly is a sort of trainable proxy, which can emulate APIs after training. The concept is pretty basic and has been seen before, but it still interested me, since we use a similar pattern in our system, but without the training part, meaning that emulating e.g. 3rd-party APIs is hard work. I plan to spend some more time evaluating Hoverfly, to assess whether it could leverage our work in this area.
As mentioned earlier I had to skip the second day, so I have no notes on the talks from Tuesday.
The last day started out with Adrian Cockcroft from Amazon; he is the Chief Cloud Strategist and holds an incredibly strong resume. He talked about cloud trends, of course well founded in AWS, but still with good reflections on the role of the cloud and the issues of going into the cloud – primarily the benefits, but also mentioning some of the classical computing problems, which seem to resurface when new paradigms, technologies and trends emerge. One could argue that Adrian's talk was somewhat of a sales pitch, like the Hoverfly presentation, but I did not mind, since the presenters all reflected on and provided general insight into their respective topics.
Vijay Reddy from Google gave a presentation on Google Cloud and TensorFlow, much in the same ballpark as the other talks I just mentioned, but again with a lot of good information and a live demonstration.
A completely different kind of talk was much more theoretical and, for me, hard to follow, but it was nice with a sort of counterweight to the more concrete, pragmatic presentations. The talk was quite philosophical, but some of the key points sank in even in my thick skull.
As always, GOTO was inspiring, provocative, educational and a complete information overload. Now I will try to see how much of the accumulated information I can convert into something actionable; there most certainly was a lot to reflect on.
I see the term full-stack developer everywhere, this got me thinking:
– What does it even mean?
To recruiters it seems to say: “we want somebody who can do everything”, meaning we want to hire somebody who is a perfect match no matter what, perhaps even in the context of wherever technology may take us.
To developers, however, it seems to communicate that a developer is capable of working in all tiers of a given stack, meaning front-end to back-end, UX implementation to data modelling, and everything in between.
The two are actually not far from each other, but … what stack are we actually talking about when we say full-stack and is this even possible?
Let's start by examining what a stack is.
There used to be a term you do not see so often anymore, since it has somewhat been displaced by the full-stack term, and that is LAMP (Linux, Apache, MySQL, and PHP; source: Wikipedia).
The top layer of the stack can be replaced by a suited language like Perl, Python and possibly even Ruby, meaning that the definition of the LAMP stack is not even entirely clear – but this is a basic traditional stack, and possibly one which was and still is dominant in many workplaces, simply due to its popularity and widespread adoption.
Since the LAMP stack is a bit ambiguous, let's break it down. From an MVC (Model-View-Controller; source: Wikipedia) perspective it would look like this, and we get a recognisable depiction. This makes it a bit easier to identify the tiers of which the stack is comprised – let me get back to this later…
As mentioned the LAMP stack is still around in some sense, but the classical representation does not really depict the more fine grained stack predominant today.
A 6 Tier Stack
Which brings us to another very predominant stack – the MEAN stack (MongoDB, Express.js, Angular.js and Node.js, source: Wikipedia).
The abstraction of this stack demonstrates a very interesting aspect of the more modern stack – the separation from the operating system. Of course there is an operating system beneath the stack, but in general it does not play as significant a role as it did earlier.
This whole discussion of the significance of the operating system as part of the stack is interesting, but unfortunately the topic is beyond the scope of this article, so I will leave that stone unturned.
Yes, there are alternatives, and there are possibly as many variants of this traditional stack as there are software teams/developers/PaaS product managers. I have attempted to annotate the 6-tier stack with some technologies just to give you a picture; there are languages/technologies that I have unintentionally left out, forgotten or have not heard about – fill me in in a comment – but let's move on.
To get back to the question “What does full-stack developer even mean?”: building on the figure above, you would have to select a set of technologies to represent the 6 tiers of your stack, and then you should be able to work in all tiers of that stack, meaning front-end to back-end, UX implementation to data modelling, and everything in between.
What I have observed is that these stack-jumping super-developers do not really exist.
Based on the above plethora of choices, I do not think it makes sense to talk about full-stack developers in general, unless you are very clear on what stack you are talking about. Yes you could go for the most prominent stack or buy into some specific eco-system. I have chosen not to touch on the Microsoft stack, since I have no knowledge of Microsofts current offerings in this area and the same goes for Java.
What I have observed is that developers come from somewhere. They have played around with some technologies, languages, frameworks, applications, operating systems – a stack – while they learned their way around computers. Then they got a job or an education and got exposed to new technologies and possibly even new stacks.
Some developers are top-down; they originate from the front-end, design, UX/UI or some industry or education related to visual presentation, or they just started working with websites and at some point needed some more functionality.
Others are bottom-up developers; they originate from systems administration and operations, which means they know the OS, they know the applications part and all the operational aspects of modern applications. Perhaps they even originate from database administration, meaning they are strong on data modelling etc.
The type I most often see is the diamond developer. These are classical computer programmers. They program, and that is what they do. They originate from classical programming, and at some point they needed to work with data, so they got exposed to database technology. Later, or before, they got exposed to the concept of users (other than themselves) and had to create a proper user interface.
There are of course exceptions to all of the developer models I mention above, and you can vary the size and contents of the different tiers, where some diamond developers excel in several interpreted and compiled languages, even in other language concepts than the traditional procedural/OOP programming paradigms.
No matter where you come from, your perspective is unique and your toolbox is unique.
So what is a full-stack developer? It is a developer capable of working in the different tiers of the stack, who can understand the different paradigms and technologies of which the tiers are comprised, while utilising best practices and embracing requirements, and who can consolidate everything into an application (on schedule, on budget and with minimal defects and maximum security).
Developers are a special lot. We specialise in our tools; we get thrown a problem, and we have to understand the problem and the problem area, which means we have to become specialists in that particular problem area. When we solve the problem, we get thrown a new problem, perhaps in a new job, gig or project, and we have to become specialists again, but we often reuse our toolchain, which all of a sudden becomes a general tool. As we evolve through problem solving we extend our toolbox – we become specialised generalists or general specialists.
So, to rephrase the question: do Full-stack Polyglot Specialised Generalist Developers exist?
I have met many extremely talented, clever and resourceful programmers over the course of my career and I hope to meet many more.
My conclusion is that the term full-stack developer has to take the following into account:
What does the stack we are talking about comprise?
How many variations has your shop/project made to the stack?
What is the prominent stack on the market?
Then we have a slight chance of talking about the term full-stack developer and being on the same page, but at the same time it does not even make sense as a general term – developers evolve and stacks change, due to developer evolution.
The stack of yesteryear might be the legacy system you have to work with in your next problem area, or you might have to work with the untested, error-prone, top-notch newest stack on the block, designed especially for the problem at hand.
Stacks are just bundles of tools. Focus on problem solving, extend your toolbox, become a specialist, have a general perspective on technology and choose the best tool for the job. You will encounter numerous stacks; you will by far not become a specialist in all of them. Some you will love, some you will hate, but for most you will do both, since there are no silver bullets in software development – but there are plenty of problems, tools and the fantastic gratification of providing the solution.
Please note that this post is by no means endorsed by my employer; it is a personal reflection on a strategic move I have participated in, in my line of work as a professional software developer.
The above paragraph, which I felt a need to write as part of this blog post, is very much aligned with the actual post topic, so please read on.
The place where I work has for a long time published specifications for some of the offered services. This was done either using our CMS or as PDF artefacts from a word processor.
Both processes were tedious and had several issues:
Content in the CMS
Hard to edit longer documents with figures and cross-references
Version control was not obvious
Drafts compared to published documents were not used
PDF artefacts from the word processor
Version control practically non-existent or external
Document control based on file shares, folders and naming conventions
Manual publication process
There were probably several other issues I have happily forgotten, like PDF metadata removal etc.
After having done a lot of open source work in Markdown on GitHub, and in conjunction with the release of some open source demo clients for our services, I proposed that we publish the accompanying public specifications on GitHub.
This proved to be a very clever move.
It did require some consideration on our side, and it was quite a new move to us. Yes, we had published specifications for public availability for a long time, but putting these in a public repository was still new to us. The concerns hastily evaporated, though, and the process became natural and incredibly productive compared to the old processes.
At the time of writing we have 6 open sourced specifications and 3 clients accompanying these and a repository with XSD files supplementing one of the specifications.
Not all of the specifications are finished, but they are out there, so if somebody wants to see what is going on, they are most welcome. We have only received a single pull request, and that is completely okay. We do not want somebody else to write our specifications – that is our job – but corrections to sample code, clarifications and of course spelling corrections are most welcome.
Here are some of the pros I have observed:
Using Git and Markdown
Version control is built-in
Markdown is quite powerful and easy to edit
Syntax highlighting of code samples (bash, XML, JSON, text etc.)
The flow resembles a development flow and the toolchain is somewhat the same
Tagging of versions and complete history is available
An engaging process supporting pull requests (oh well)
Branching for new editions and proposals for change requests
Currently one of the specifications has 4 branches; when evaluation and review are finalized, they will be merged onto master, which can then be tagged as the authoritative specification – and this process is so easy to grasp and complete, since it is the same process we use for source code.
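The flow for a specification edition could look roughly like this – a toy demonstration in a throw-away repository; the repository path, branch name and tag are invented:

```shell
# Set up a throw-away repository for the demonstration
rm -rf /tmp/spec-demo && mkdir /tmp/spec-demo && cd /tmp/spec-demo
git init -q
git config user.email "demo@example.com"
git config user.name "Demo"

echo "# Service specification, edition 1" > SPECIFICATION.md
git add SPECIFICATION.md
git commit -q -m "Edition 1"

# Work on a proposal for the next edition on its own branch
git checkout -q -b review/edition-2
echo "## New endpoint" >> SPECIFICATION.md
git commit -q -am "Describe the new endpoint for edition 2"

# When review is finalized, merge back and tag the authoritative revision
git checkout -q -
git merge -q --no-edit review/edition-2
git tag -a 2.0.0 -m "Authoritative specification, edition 2"
git tag --list
```

The tag then serves as the stable reference you can link to, exactly like a software release.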
One last benefit I really enjoy, and one which I think is a bit underestimated, is the contract.
When we publish a public specification, we aim to be informative, useful, correct, exact, educational, clear and to the point.
This works quite well. I often find that we refer to the public specifications when discussing topics related to our services, and since their quality sometimes outshines our internal specifications, I often find myself thinking that we should publish much more, much much more.
So, revisiting the opening paragraph – ever so often we are afraid, and publishing APIs becomes a side project. Do not be afraid to publish your specifications and documentation; do not be afraid to use an existing platform and toolchain. The pros outweigh the cons, you will quickly forget all about the old way of doing things, you will find yourself more productive, and in the end getting your specifications published will be easier than ever.
Whether you are publishing a website or a PDF document, the information is public; the process is actually the most important aspect, and GitHub and Markdown REALLY leverage this.
The discussion of how far you can go and how much you can publish is a huge topic and should perhaps be another blog post.