Code for Production not for Development

An anti-pattern I have observed, and probably also been the cause of, is:

Coding for Development first, Production second

The challenge with this anti-pattern is that it is well concealed: it is not done out of evil intentions, in fact quite the opposite, but it is an anti-pattern nonetheless.

The pattern can have the following effects:

  • It introduces unnecessary complexity
  • It can introduce security issues
  • It obscures priorities

There are exceptions of course, where this anti-pattern does not apply because the capability is part of the product design. This can be features like:

  • ability to run diagnostics
  • enabling of debug logs
  • dry-run mode

My argument is that if these are not part of your product design, such features are actually feature creep.

We would rather have feature creep, than creepy features

Sorry, that is just a product management joke, let’s move on.

The complexity issue is quite obvious. If the code base has features implemented to support the development process, debugging or other special traits, which are not part of what is specified for production, the code base grows: more lines of code, more conditionals, and more logic branches. This means that the code holds additional structures not necessarily part of the designated product, and test coverage reporting, testing and QA might suffer, because the code might only be exercised based on the original production specification, leaving these special circumstances untouched by those activities – the test coverage might tell us, but then again, it might not, depending on your tooling.

If done properly, this is not necessarily a bad thing, and in some situations it is even acceptable; an example could be log levels.

Modern loggers can differentiate between log levels, ranging from debug and diagnostic messages over informative and warning messages to errors and critical messages. Here the behaviour is supported as part of the logging strategy, and you can decide (configure) whether debug messages should only be enabled in test or development and perhaps not in production. Such a logging facility would probably be an integration of an external component/framework, which is tested separately, and therefore it can be assumed to work even for the parts not necessarily enabled in the product in which it is integrated.
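To illustrate the idea – the file names and the key below are made up and not tied to any particular logging framework – the switch can live in environment-specific configuration rather than in the product code itself:

# development.conf – verbose output for developers
log_level = DEBUG

# production.conf – only warnings and above are emitted
log_level = WARNING

The decision of what to log then becomes a matter of configuration, not of conditionals sprinkled through the code base.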

Another issue with coding for development first, is that products or parts of products can be complex and hard to test. This can lead to implementation of code circumventing protective and defensive facilities. If these circumventions are easy to enable by developers they might also be enabled by users in production, which might lead to unexpected security issues or information leaks.

There is a variation of YAGNI (“You aren’t gonna need it”), which is:

If you do not want to use it in production, do not put it there

Blindly adding features, interfaces, parameters, flags etc. to assist in development extends the attack surface of your product. If these parts are not an integral part of the regular production specification, they might slip under the radar of security audits and penetration tests, as well as of the normal QA and test activities.

An example could be if you do SCRUM or similar and a feature is not described as a story but is implemented and slipped into the product. It might not pass through the normal process for a story. A possible remedy is peer review, but again the product might be shipped with bloated facilities, which might cause trouble down the line, and people will have a hard time understanding and explaining why – since the issues are caused by unofficial parts of the code.

If you go down this path, document these approaches and standardise them to the extent possible. Do not forget that this is not hostile code, it is actually put there to help the developers, but it has to be put under the same scrutiny, and its implementation should follow the same process as other code changes and additions.

The last one – obscuring of priorities – is a bit special. Developer time is expensive, and sometimes developers come up with features just for themselves. As mentioned, they do so out of good will towards the process they are developing under and towards their teammates – but all in all, the features they impose on the product have not necessarily been part of the production specification, and since it is code, it can hold just as many bugs, security issues and other surprises as the rest of the product, while not having been aligned with the other stakeholders in the product development. As mentioned earlier, code might be complex – address the complexity instead of working around it, since working around it might eventually add additional complexity.

And what might seem like a loophole might end up costing a lot of man-hours at some point in the product’s life.

If these features are truly necessary for development, get them out in the open. Spending time implementing these exceptions to the regular flow, in order to be able to test or inspect, can cost a lot of man-hours, so make sure it also brings the expected value and not just more lines of code.

All that said, implementing this anti-pattern is sometimes necessary and can help developers with troublesome products or implementations, so sometimes it can be the solution to go anti… but be wary.


VIEW Based Contracts for your RDBMS

We are building our application on a pretty old-school standard stack: a web application on top of an RDBMS.

Yesterday I set out to fix a bug, and after chasing it in the application layer for a long time, I ended up fixing it in the database. This was fairly easy and did not require any deployment apart from the changes to the database; the benefits are that it works right away and is fairly easy to test.

This led me to think it was about time I wrote a post about this architectural pattern, with which we have had much success.

Our RDBMS holds a standard data model in third normal form. On top of this we have encapsulated the database in views.

And that is pretty much it, but let me demonstrate with a basic example.

(Figure: model)

Here we have a basic database model consisting of two tables, one containing address data and the other containing zipcode data.

CREATE TABLE address (
    street TEXT NOT NULL,
    no TEXT NOT NULL,
    floor TEXT NOT NULL,
    door TEXT NOT NULL,
    zipcode TEXT NOT NULL,
    country TEXT NOT NULL,
    FOREIGN KEY (zipcode) REFERENCES zipcode (zipcode)
);

CREATE TABLE zipcode (
    zipcode TEXT PRIMARY KEY,
    city TEXT NOT NULL
);

The example was implemented on an SQLite database for availability, in case you want to try out the provided examples.

The basic concept is to use the data model directly, but only access it through views. So we add our first view.

CREATE VIEW zipcode_v1 AS SELECT * FROM zipcode;

Do note the naming convention of keeping the name of the encapsulated table and appending “_v1” – more on this later.

(Figure: zipcode_v1)

This is one of the easy ones, and at this point it just seems silly and like a lot of work. One of the concerns I have heard about this approach was:

“but our database will be full of objects”

Yes, this will at least double the number of objects in your database, but the benefits of this approach outweigh the maintenance, and the objects serve a purpose.

What could the use case be for this view? Well, it could be used to populate a dropdown with zipcodes in a web application or similar.
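A hedged sketch of the kind of query the application could then issue – note that it only references the view, never the underlying table:

SELECT zipcode, city FROM zipcode_v1 ORDER BY city;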

The next natural step after the one-to-one view implementation is the more complex data types and hence queries. So if you want to query complete addresses, including the city, you would have to do something like:

SELECT
    a.street AS street,
    a.no AS no,
    a.floor AS floor,
    a.door AS door,
    a.country AS country,
    z.zipcode AS zipcode,
    z.city AS city
FROM zipcode z, address a
WHERE z.zipcode = a.zipcode;

What if we could just do:

SELECT
    street,
    no,
    floor,
    door,
    country,
    zipcode,
    city
FROM address_v1;

Well we can, just create a view to help us out.

CREATE VIEW fulladdress_v1 AS SELECT
    a.street AS street,
    a.no AS no,
    a.floor AS floor,
    a.door AS door,
    a.country AS country,
    z.zipcode AS zipcode,
    z.city AS city
FROM zipcode z, address a
WHERE z.zipcode = a.zipcode;

And our database now looks like this:

(Figure: address_v1)

And we are slowly assembling contracts with our RDBMS.

As you can read from the naming of the views, the one-to-one implementations reuse the name of the encapsulated table, whereas the views encapsulating more than one table are named by intent or purpose, indicating what they aim to serve.

But is that fast enough?

In a modern RDBMS I would expect the performance hit to be insignificant; I do not, however, have the numbers to back this up – perhaps another blog post should shed some light on this. Do note that this is an architectural approach which holds other benefits; it is not aimed at high performance as such, but at maintainability and simplicity.

So now we have full encapsulation of our model.

  1. We can change the model as long as the contract is kept intact

Next up is a somewhat awful example, but imagine that somebody wants to change the model. We do not want to sound too American, so zipcode has to be exchanged for postal code.

PRAGMA foreign_keys=off;
BEGIN TRANSACTION;

DROP VIEW fulladdress_v1;

ALTER TABLE address RENAME TO _address_old;

CREATE TABLE address (
    street TEXT NOT NULL,
    no TEXT NOT NULL,
    floor TEXT NOT NULL,
    door TEXT NOT NULL,
    postal_code TEXT NOT NULL,
    country TEXT NOT NULL,
    FOREIGN KEY (postal_code) REFERENCES zipcode (zipcode)
);

INSERT INTO address (street, no, floor, door, postal_code, country)
    SELECT street, no, floor, door, zipcode, country
    FROM _address_old;

CREATE VIEW fulladdress_v1 AS SELECT
    a.street AS street,
    a.no AS no,
    a.floor AS floor,
    a.door AS door,
    a.country AS country,
    z.zipcode AS zipcode,
    z.city AS city
FROM zipcode z, address a
WHERE z.zipcode = a.postal_code;

COMMIT;

PRAGMA foreign_keys=on;

Renaming a column in SQLite is a bit cumbersome, please bear with me; selecting another implementation than SQLite would have made the above example shorter.

  1. First we drop the view
  2. We rename the old table
  3. We create the new table with the required column name change
  4. We copy the data from the old table to the new table
  5. We re-create the view encapsulating the change and keeping our contract

(Figure: address_v1)

If we were to follow through with the renaming, the zipcode table involved via the foreign key would have to be renamed as well. I have not included this in the example, but the pattern is the same and so is the benefit: the table can be renamed, but the encapsulation keeps the contract intact, and our applications using the database will not have to be changed.

This can be quite useful in another use case: say you have an existing model which you want to expose to some domain-specific area. You can then keep your original model and expose the data in views, where the data are presented following the naming of the domain-specific area, as sketched below.
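A hedged sketch on top of the example model – the domain-specific names (house_number, postal_code) are made up for illustration:

CREATE VIEW postal_address_v1 AS SELECT
    street,
    no AS house_number,
    floor,
    door,
    zipcode AS postal_code,
    country
FROM address;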

All in all everything looks hunky-dory. But there are some pitfalls with the whole view based contract approach; to name the most prominent ones:

  1. Naming
  2. Transparency
  3. Maintenance
  4. Information leak

So let us go over these.

Naming is hard. For the one-to-one views you are at the mercy of your model, which is okay, but if you already have bad naming in your model this will be reflected in the naming of your views, so one could decide to eliminate bad naming in the encapsulation layer – which brings us to transparency.

For transparency it is recommended to somewhat keep the names from the original model, since the data model will often be embedded in the minds of the users of your database. Do note that we implemented the views on top of an existing data model, so people often resorted to relating to the actual model and not the abstraction/encapsulation – it would be nice if we could stick to the abstractions instead of the implementation for some discussions 🙂

Naming for intention is harder, but resembles a proper abstraction more than the one-to-one mapping does. We started out using the views for our services; the approach did however propagate into our batch components, where it proved quite useful.

We observed the batch components becoming slimmer, because the decision logic was moved into the database contracts. A script for deleting records would simply work on a view, where all the records qualified for deletion would be available for processing, and the records not qualified for deletion would never be presented to the executing application by the view, as sketched below.
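A minimal sketch of such an intent based view – note that the expiry_date column is hypothetical and not part of the example model:

-- expiry_date is a hypothetical column, used here for illustration only
CREATE VIEW address_expired_v1 AS
    SELECT street, no, floor, door, zipcode, country
    FROM address
    WHERE expiry_date < DATE('now');

-- the batch component only ever sees records qualified for deletion
SELECT * FROM address_expired_v1;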

When it comes to maintenance and life-cycle, we delete views when they become obsolete. This applies especially to intent-based views, where we can see that we have several revisions: “_v1”, “_v2” and “_v3”. When we can see that no applications use “_v1” anymore, we simply delete it.

The other example of deletion is when a view implements business rules, which no longer apply and hence should not be available.

As described in the beginning of the article, we could do views like the following:

CREATE VIEW zipcode_v1 AS SELECT * FROM zipcode;

Do note that this approach means extensions to the model are automatically exposed via the contract. If you do not want this automatic exposure, your views should be restricted in their implementation, only offering the fields you have agreed to in your contract.

The bug I mentioned in the beginning of the article was somewhat related to this sort of information leak: a field was not properly handled by the encapsulation and hence exposed.

A brief example: if our database was extended with information on who inserted a given record – a created_by field so to speak – then instead of doing:

CREATE VIEW zipcode_v1 AS SELECT * FROM zipcode;

We should do:

CREATE VIEW public_zipcode_v1 AS SELECT zipcode, city FROM zipcode;

And you could have a similar view for your internal application, including the new created_by field, defined as follows:

CREATE VIEW internal_zipcode_v1 AS SELECT zipcode, city, created_by FROM zipcode;

So you now have two views, named with intention and not leaking information beyond our contract. There is of course still the issue of object referencing, since our views do not as such restrict access across object ownership/relations – that is a topic for another blog post – but the approach does expose only the data you are interested in serving via your application and not necessarily your full model and dataset.

But we are using an ORM?

Well, point your ORM schema/code generator at the views instead of the actual model. This also has some pitfalls: not all RDBMSes support writable views, so if you are a heavy ORM user with a database that does not support writable views, you might not have much luck with this contract approach. A combination with stored procedures or similar could be the way to go – which reminds me that I have to mention Haktan Bulut, my former manager, who introduced me to this approach in an architectural design specification.
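In SQLite, for example, a view can be made writable with an INSTEAD OF trigger; a minimal sketch for zipcode_v1, assuming inserts should simply pass through to the underlying table, could look like this:

-- make the zipcode_v1 view accept INSERTs by forwarding them to the table
CREATE TRIGGER zipcode_v1_insert
    INSTEAD OF INSERT ON zipcode_v1
BEGIN
    INSERT INTO zipcode (zipcode, city) VALUES (NEW.zipcode, NEW.city);
END;

Similar triggers would be needed for UPDATE and DELETE if the ORM expects a fully writable view.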

The concept is pretty simple, but it seems like a lot of work. I have however come to the conclusion that it saves us quite a lot of work once it is established, and as long as our contracts are sane our applications can be trimmed down:

  • It is easier to understand the interaction between the components using a contractual approach
  • We do not expose unnecessary data to our applications

It requires time to encapsulate everything and it takes some effort to maintain, but putting changes to data exposure under control is a good thing in my book, since we always have to think about what we do when we extend or restrict the contracts – and last but not least, we can optimize the model without breaking our applications.


Watching live coding – strangely intriguing…

Watching live coding is strangely intriguing…

I cannot locate the exact resource and therefore I cannot reference it or make sure that the quote is correct, but the thing that caught my attention was something along the lines of the above quote, which I read somewhere online. I had heard about live coding streams in different fora; it sparked my curiosity and I decided to check it out.

I decided to watch Suz Hinton (@noopkat) after reading Lessons from my first year of live coding on Twitch and hearing an interview with her on the podcast Hanselminutes.

@noopkat does her live coding stream on Twitch, which I know from my two sons, both avid gamers and YouTube watchers. There are other outlets for live coding streams, but I have no experience with any of these. I personally find Twitch very accessible and useful: you can watch on the web, they offer a native client, or you can watch on your smartphone. I once tried watching on my phone on the train, but the signal was not entirely stable and in the end I had to give up.

Unfortunately @noopkat always streams on Sundays when I am making dinner, so I cannot always pay close attention, or I have to pick an easier dish not requiring my complete attention – anyway, I am hooked.

The best recommendation I can give is watching in the comfort of your sofa or similar, like old-school flow-TV. I once had the stream running on a PS4, where a Twitch client is also available, freeing up my laptop to do something else – actually I find watching live coding inspirational, and doing some coding myself in parallel or looking up related resources is useful. The chat interface was however open on my computer so I could participate in the live coding stream, since the PS4 keyboard interface is not optimal – more on this later.

For a long time the Internet and the streaming medium have gone towards convenience consumption. You watch what you want, when you want. If you want to binge, you binge, and if you want a break, you take a break. So it is sort of weird that live streaming consumption is attractive, since you now have to hurry home to catch the stream, or postpone dinner, much like when all we had was flow-TV with static schedules.

Twitch is primarily focused on gaming and gamers, but a few live coders can be found using the platform. I have watched @yom_na and @thelarkinn, whose stream I caught for the first time today before work. If you watch the episode from today, you can hear a shout-out to me, since I had to leave for work. And this is where live coding streams differ from regular flow-TV: the social aspect of the live streaming is important, it helps to build up a social relation and sense of community, and even the spectators participate in the stream. @yom_na streamed a live coding session fixing issues and PRs in an open source project I am also contributing to, so that was quite educational.

I think I will continue to watch live coding streams; it is fun and stimulating. The next question is whether I should try to do a session myself. The software used by @noopkat, OBS, is free and it would be fun to try out. The only issue is that all of the people I mentioned are incredibly talented and I am not sure I would be able to deliver at the same high level.


Terms and Conditions as a Service – literally

Some time ago I changed my title to Product Manager. For many years I have worked as a developer and later team-lead for a development team, so this was an interesting change.

Working as a team-lead had slowly removed me from actual day-to-day coding, towards more and more human-resources related tasks and meetings. So when it was suggested that I play a more active role in the software development, without the team responsibilities, I accepted. The only requirement presented to me in my new role was:

Use your knowledge and know-how to continuously support our software services and products.

I was a bit uneasy with the new role and perhaps mostly the title. Having worked as a developer for a long time, it was hard to lose the techie. I suggested “Technical Product Manager”, but it was denied – I got over it and at that point it really did not matter; after all it was just a title *1

Still fearing that it would move me away from coding, I decided to try to shape my new role to suit me better. The organisation I work for has never had a Product Manager before, so I figured I might as well try to outline my own role.

I started out by examining an idea I had played with for some time, but had not implemented. As a Product Manager I decided it was totally legal to create prototypes to evaluate possible candidates for our service portfolio.

The idea was to handle the problem area of “Terms and Conditions” and communication of these. The problem area can be described in the following way:

  1. The terms and conditions have to be available in a preservable format (I am not a legal specialist, so I do not know the exact wording, but this is the way it was explained to me)
  2. The terms and conditions have to be available to the end-user in the revision/version originally presented to the user

In addition, the following, more basic, requirements followed:

  1. We want to be able to link to the current terms and conditions, so you can find them for example via a website
  2. We want to be able to link to specific revisions so we can create links for websites
  3. We want to be able to communicate the terms and conditions via email, without sending the complete terms and conditions, but just providing a link
  4. We want to support both Danish and English

I boiled together a prototype service to handle exactly these requirements and the prototype can be found on GitHub and on DockerHub.

The solution offers the following:

– Terms and Conditions can be downloaded as a PDF and this has been accepted as a preservable format
– You can link to an exact revision, for building lists for example
– You can link with a date parameter, which will give you the revision relevant for the given date
– You can link to the service and get the current revision of the Terms and Conditions
– You can point to a given translation of the document in the URL by using the language indication ‘da’ for Danish and ‘en’ for English

Let’s walk through it:

– Providing PDF files as an asset is pretty easy in any modern web development framework

– The date based query:
/en/terms_and_conditions/20020611

Returns the terms and conditions active for the specified date. This can be used in emails, for example, where you can then stamp the URL with the current date.

– The revision based query:
/en/terms_and_conditions/revision/2

Returns current terms and conditions revision 2. This can be used for enumerations and listings or specific deep links.

– The basic query:
/en/terms_and_conditions

Returns current terms and conditions, which can be used for webpages where you want to show the current revision for evaluation by the requester.

– The basic query, supporting another language:
/da/terms_and_conditions

Returns the current terms and conditions in Danish; this can be changed to English by specifying en instead of da.

All of the available documents are assets to the service; these could be fetched from a database or similar, but in the prototype they are just publicly available files.
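If the documents were moved to a database, the central resolution could be sketched as follows – the table and column names are made up for illustration and are not taken from the prototype:

-- hypothetical table of terms and conditions revisions
CREATE TABLE terms (
    revision   INTEGER NOT NULL,
    language   TEXT NOT NULL,    -- 'da' or 'en'
    valid_from TEXT NOT NULL,    -- ISO 8601 date, e.g. '2002-06-11'
    document   BLOB NOT NULL     -- the PDF asset
);

-- the revision active for a given date and language: the newest revision
-- whose start date is not after the requested date
SELECT revision, document
FROM terms
WHERE language = 'en'
  AND valid_from <= '2002-06-11'
ORDER BY valid_from DESC
LIMIT 1;

Note how a revision with a start date in the future is simply never selected until that date is reached.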

The prototype solves the problems outlined and gives us an overview of the public facing part, meaning the service and feature parts, but can be improved in many areas, especially in regard to the internals.

– You could annotate the documents if they are no longer the current revision. My suggestion is to annotate the actual PDF; alternatively the presentation in the browser could take care of this. The current prototype does not handle this.

– Handling the problem of different timezones can be quite complex, my recommendation is to decide on one timezone being the authoritative timezone

– The algorithm for resolution could be optimised

– The representation of the terms and conditions artefacts in the back-end could be moved to a database

– The date parameter is a weak point and the parameter handling could also be improved; at the same time, we expect to label the URL resulting in a query with a date format we already know

The prototype even holds a bonus feature: due to the way the central algorithm works, you can actually add an asset in advance. It will not be served as the current revision of the terms and conditions until its start date has passed. This means that nobody has to work on New Year’s Eve to publish the new revision of the terms and conditions for availability on the 1st of January.

Such future assets can of course still be retrieved based on the revision. Handling of this could be implemented, but I actually consider it a good thing, since it means that you can test the application without jumping through too many hoops.

I have never worked much with prototypes on a larger scale before, but using my boring stack it was actually quite fast to get something to work. It shed light on interesting aspects of the UX and the internal implementation, like the main algorithm, and finally it provided a proof of concept, which could spark new ideas.

Becoming a product manager is hard, but it does not necessarily mean that you have to be removed from coding. Prototyping is a lot of fun and it is most certainly not the last time I have approached a problem area in this way.

*1 Title changes can backfire: ever since I changed my title on LinkedIn I have received a lot of Product Manager related stuff


DockerCon Europe 2017

I have just attended my first ever DockerCon. I was so lucky that the conference was taking place in my hometown – Copenhagen.

It was quite awesome. I recently attended GOTO Copenhagen at the same venue, but DockerCon was a lot bigger, with many more tracks, sessions, exhibitors and of course attendees. I have attended tool-focused tech conferences before, but this one reminded me more of OSCON.

Speaking of attendees, DockerCon did something very cool by facilitating a hallway track, where you could either invite other users or see what other users wanted to talk about and then make contact. This put me in contact with some other developers and we could exchange experiences and war stories.

The Sunday before the conference I attended a hackathon organised by the local Docker User Group and one of the exhibitors (Crate.io), so I actually got to meet some of the other attendees in advance. For the first hallway track talk I attended, I therefore met a familiar face. Later on I met complete strangers, but it was really interesting to just meet and talk about software development and Docker.

The overall focus of the conference was very much on the operational part, integration of legacy Windows and Java apps and orchestration systems like Kubernetes, Mesos, Swarm etc.

I still feel a bit like a Docker n00b, but attending a talk by @abbyfuller showed me that I am at least getting much of the image construction right. I still picked up a lot of good information, and it is always good to attend a conference to get your knowledge consolidated and debugged.

Another very good talk, by @adrianmouat, was entitled “Tips and Tricks of the Captains”. This presentation was jam-packed with good advice and small hacks to make your day-to-day work with Docker more streamlined. Do check out the linked slides.

I attended a lot of talks and got a lot of information. It will take me some time to get the notes clarified and translated into actionable items, but I can mention:

– freezing of containers for debugging
– multi stage builds
– improved security for running containers (user id setting) and use of tmpfs for mount points
– The scratch image

In addition to the talks I visited a lot of exhibitors. I made a plan of exhibitors to visit based on our current platform at work. My conclusion is that Docker is here to stay, and the integrations being offered are truly leveraging container technology, making it more and more interesting to evaluate in the context of using Docker in production. Currently we only use it for development; the next step to evaluate is test and QA.

Many of the companies making Docker integrations even offer their projects as open source, such as Crate.io with CrateDB and Conjur from CyberArk – I had never heard of these companies before. Crate.io sponsored the Sunday hackathon and has a very interesting database product. CyberArk’s Conjur is aimed at secret sharing, an issue many of us face.

Apart from the list above and the interesting products (not only open source), the whole conference spun off a lot of ideas for other things I need to investigate, implement, evaluate and try out:

– Debugging containers (I have seen this done in the keynote from DockerCon 2016)
– Docker integration with Jenkins for CI, there is a plugin of sorts

I plan to follow up on this blog post with some more posts on Docker. The motto of the conference was something about learning and sharing – that was most certainly also practiced, so I have decided to give my two cents over the following months.


SublimeText and EditorConfig and eclint

Following some of the cool developers on Twitter, GitHub, blogs etc., I stumbled upon EditorConfig. The homepage of the project boldly stated:

EditorConfig helps developers define and maintain consistent coding styles between different editors and IDEs. The EditorConfig project consists of a file format for defining coding styles and a collection of text editor plugins that enable editors to read the file format and adhere to defined styles. EditorConfig files are easily readable and they work nicely with version control systems.

I primarily use perltidy for my Perl projects and I have used other pretty-printers in the past, so I understood what it wanted to do, but it seemed so general that it did not really bring any value, not being able to replace perltidy or similar, so I disregarded it as a fad.

Anyway, EditorConfig kept popping up in the projects I was looking at, so I decided to give it a second chance. I am not doing a lot of projects with a lot of different languages involved, but all projects do contain some source code, possibly some Markdown and some other files in common formats etc.

The formatting capabilities of EditorConfig are pretty basic, since it does not go into deep formatting details for all the languages out there – which would also be incredibly ambitious – but it covers basic formatting like indentation size and style, encoding, EOL and EOF handling. This seemed pretty useful for the files where I could not control the format using perltidy, so it would be a welcome extension to my toolbox.

Luckily a prolific GitHub contributor, Sindre Sorhus, had implemented a plugin for SublimeText (my current editor of choice). So I installed the plugin, got it configured for some of my projects and started using it.

Apart from the editor part, you simply place a configuration file named .editorconfig in your project, configure it to handle the file types contained in your project and you are good to go.
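A minimal example of such a file – the property names come from the EditorConfig documentation, the concrete values are just an illustration:

# .editorconfig
root = true

[*]
charset = utf-8
end_of_line = lf
insert_final_newline = true
indent_style = space
indent_size = 4

[*.md]
trim_trailing_whitespace = false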

The problem – well, not really a problem, but a common misunderstanding – is that it reformats ALL your code. It does NOT. It only works on newly added lines. At first you might be disappointed, but just opening your editor with an active plugin should not mean that all your code has to be recommitted with extensive diffs confusing everybody (and yourself) – so this is actually a reasonable constraint.

Anyway, at some point you might want to reformat everything to get a common baseline. Here eclint can help you; eclint is available on GitHub. eclint can work as a linter, meaning it checks your adherence to the configuration (.editorconfig) specified, but it can also apply it.

Check:

$ eclint check yourfile

Apply:

$ eclint fix yourfile

EditorConfig can help you keep your own formatting consistent for some of the more esoteric file formats, and when contributing to other people's projects you do not have to go back and forth over formatting issues – well, you might, but the EditorConfig-controllable parts will be aligned. Check the website and read up on integration with your editor of choice.

eclint can help you establish a formatting baseline for your own projects, but do read the documentation and do not mix it up with your regular development or yak-shaving, since you could face large diffs.

Happy formatting,

jonasbn


GOTO Copenhagen 2017

The Copenhagen edition of the GOTO conference has come to an end. I was able to attend 2 of the 3 scheduled days. I decided beforehand not to sign up for any tutorials, since I knew it would be difficult to take so much time away from work assignments. As anticipated I ended up having to skip the Tuesday sessions due to work priorities and constraints. I am glad that the conference is in Copenhagen, but perhaps going abroad would mean less interference – then again I would probably be caught in some hotel room doing Skype sessions.

When it comes to attending conferences and the like – taking time off to go to these things, network, reflect and learn – I find it incredibly important, and I used to do it a lot more. At the same time I find it important to keep a balance between obtaining these stimuli and actually executing on them, by applying newly learned techniques, tools and practices to your daily work. On the other hand, daily work often settles into almost static routines and die-hard practices if not scrutinised and challenged. In addition it would be awesome if you could set aside time to experiment with all the stuff you cannot squeeze into your daily work routine.

Now on to the actual content. I will try to give a brief overview of my observations from the conference based on the notes I jotted down. I will not attempt to give a complete account, but some of the more interesting things will be mentioned. I encourage you to check out the GOTO Play app if you want to watch the videos of the actual talks; most of them will probably make it to YouTube at some point in the future.

The first talk I attended was entitled “SCRUM vs. SAFE”, an interesting talk based on yet another method, SAFe, which attempts to address some of the shortcomings in SCRUM adaptation, such as running siloed SCRUM in agile teams in a waterfall organisation. Tomas Eilsø, the presenter, gave an entertaining presentation with some good examples, so even though it was a method talk, it was not textbook excerpts, but based on Tomas' experiences as a fighter pilot. The talk drew parallels to military decentralisation. The presentation also touched on topics like building an environment of trust, using cross-checks to stay safe and sharing mental models. Indeed a great talk with lots of good points, even if you are not into SCRUM or SAFe.

One of the really interesting takeaways was the OODA loop, invented by John Boyd – the Observation-Orientation-Decision-Action loop or cycle – which might be interesting in an agile setup for software development and business.

Mark Seemann (@ploeh) gave an excellent talk with the weird title “Pits of Success”. I have been following Mark for some time, and even though he works in areas quite different from mine, meaning functional programming and F#, his presentation was awesome, entertaining and insightful. The presentation contained some very nice animations related to the title; be sure to watch the talk if you are intrigued.

The last presentation of that day was on a product named HoverFly and the importance of being able to test an API-driven architecture. HoverFly is a sort of trainable proxy, which can emulate APIs after training. The concept is pretty basic and has been seen before, but it still interested me, since we use a similar pattern in our system, but without the training part, meaning that emulating e.g. 3rd party APIs is hard work. I plan to spend some more time evaluating HoverFly, to assess whether it could support our work in this area.

As mentioned earlier I had to skip the second day, so I have no notes on those talks.

The last day started out with Adrian Cockcroft from Amazon; he is the Chief Cloud Strategist and holds an incredibly strong resume. He talked about cloud trends, of course well founded in AWS, but still with good reflections on the role of cloud and the issues of going into the cloud – primarily the benefits, but also mentioning some of the classical computing problems which seem to resurface when new paradigms, technologies and trends emerge. One could argue that Adrian's talk was somewhat of a sales pitch, like the HoverFly presentation, but I did not mind, since the presenters all reflected on and provided general insight into their respective topics.

Vijay Reddy from Google gave a presentation on Google Cloud and TensorFlow, much in the same ballgame as the other talks I just mentioned, but again also with a lot of good information and a live demonstration.

A completely different kind of talk was much more theoretical and quite philosophical, and for me hard to follow, but it was nice to have a sort of counterweight to the more concrete, pragmatic presentations. Some of the key points even sank into my thick skull.

The talks will all make it to YouTube at some point, so keep an eye on the GOTO YouTube channel.

As always GOTO was inspiring, provocative, educational and a complete information overload. Now I will try to see how much of the accumulated information I will be able to convert into something actionable; there most certainly was a lot to reflect on.
