Audible Feedback

A long time ago I watched a TV show set in a South Korean shipyard. I cannot remember whether it was one of those mega-structure programmes or something similar, but I do remember that when the huge cranes in the shipyard were moving, they played tunes instead of alarms, like the beeping of trucks driving in reverse. I have thought a lot about the rationale behind this and have come to the conclusion that it was important to create a distinct sound which would stand out from all of the other noise in the shipyard.

Today I work at a workstation with three screens and a plethora of different windows: editors, terminals, database interfaces, an email client etc.

I am using headphones with noise cancellation, since I am seated in an open office space and I want to guard my attention span. I listen to music, primarily electronica without lyrics, since I have discovered that I am more easily distracted if I listen to tunes with lyrics (if somebody can explain this phenomenon, I am all ears). As for recommendations for “Background Music for Coding”: musicforprogramming.net, which I listen to regularly, together with some cool playlists on Spotify.

When coding, my eyes are occupied with source code, script output, and compiler/interpreter/toolchain feedback.

At the same time I have long and short processes running in the foreground or in the background. These are often jobs such as:

  • Test runs
  • Docker builds
  • Test database builds

So when I discovered @tara-gibbs' hey utility (Gist: hey) I was thrilled and put it to use. After some time I decided to extend my use of audible feedback.

#!/bin/bash

# REF: https://gist.github.com/tara-gibbs/cf91bc86c580f244de0ae9f5978edaac

/usr/bin/osascript -e "display notification \"$*\" with title \"Hey!\""

/usr/bin/say -v Samantha "Hey! $*"

# Setup: Add this to your /usr/local/bin in a file called hey. chmod +x it to make it an executable.

# Usage: sleep 2 && hey wake up!

This is my local version, including a reference to Tara’s original Gist. Please star Tara’s Gist if you use/like it.

Based on Tara’s script I decided to write a special script for Docker builds, named docker-build (Gist: docker-build) – sort of an equivalent of docker-compose.

A disclaimer: these scripts are not implemented for distribution and need tweaking before being able to run on your machine, and they are implemented on OSX. It should however be possible to dig up similar utilities on other operating systems for emitting alerts, voice output and playing sound files, making the scripts work on other platforms.
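For instance, on a Linux desktop a rough equivalent of hey could be assembled from notify-send (libnotify) and spd-say (speech-dispatcher). A minimal sketch, assuming both utilities are installed:

#!/bin/bash

# Linux sketch of hey: a desktop notification followed by speech output
notify-send "Hey!" "$*"

spd-say "Hey! $*"

With that said, here is docker-build: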
#!/bin/bash

say -v Samantha "Docker build commencing..."

# Receives: message and return value from the Docker build
function emit() {

    if [ "$2" -eq 0 ]
    then
        # green output
        echo -e "\033[92m$1"
    else
        # red output
        echo -e "\033[91m$1"
    fi

    # Resetting colouring
    echo -e "\033[0m"

    hey "$1"
}

./docker-build.sh

if [ $? -eq 0 ]
then
    /usr/bin/afplay "$HOME/Sounds/success.wav"
    emit "Docker build completed" 0
else
    /usr/bin/afplay "$HOME/Sounds/warning.wav"
    emit "Docker build failed" 1
    exit 1
fi

exit 0

Do note that it uses both differentiated colourised output and sounds based on the success of the Docker build. All my Docker projects have a docker-build.sh and docker-run.sh (and docker-entrypoint.sh) to avoid entering the same values over and over again for different projects, and to keep the projects somewhat uniform – your shell history might not go as far back as your work log. An example using this approach can be found on GitHub.
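A project-local docker-build.sh does not have to be much more than the following (a minimal sketch – the image name and tag are hypothetical):

#!/bin/bash

# Project-specific values live here, so they never have to be retyped
# and stay uniform across projects
docker build -t acme/myapp:latest .

The wrapper above then adds the audible feedback on top of whatever the project-local script does.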

On the forever fascinating Internet I stumbled upon some sound loops from science fiction movies. Finding the “Nostromo Ambient Engine Noise” particularly stimulating, I noticed it has some discreet beeps in the background, as if something was indicating a pulse or a running process. It reminded me of the cranes from the South Korean shipyard, for which I have still not been able to dig up any snippets or examples.

Anyway, I decided to see if I could incorporate the same sort of pulse into my long-running scripts. This also solves the issue of me not catching the termination signal: I can hear when the pulse is no longer emitted.

The prototype consists of two scripts (please forgive my very basic shell-scripting knowledge and style; it is not what I primarily code in, so suggestions for improvements are always welcome).

alive-beep.sh emits a sound every two seconds and runs forever.

#!/bin/bash

# Emit a short pulse sound every two seconds, forever
while true; do
    /usr/bin/afplay "$HOME/Sounds/alive.wav"
    sleep 2
done

fork.sh forks off alive-beep.sh and does whatever it is supposed to do; when done, it terminates (kills) alive-beep.sh and finishes by emitting a sound indicating completion. As with the docker-build script, this could be changed to reflect the outcome of the run, differentiating on sound and colour.

#!/bin/bash

# Start the pulse in the background
alive-beep.sh &

# Placeholder for the actual work
sleep 6

# Terminate the pulse and signal completion
kill $!

/usr/bin/afplay "$HOME/Sounds/success.wav"

exit 0

If we apply this to docker-build, it could look like the following:

#!/bin/bash

say -v Samantha "Docker build commencing..."

# Receives: message and return value from the Docker build
function emit() {

    if [ "$2" -eq 0 ]
    then
        # green output
        echo -e "\033[92m$1"
    else
        # red output
        echo -e "\033[91m$1"
    fi

    # Resetting colouring
    echo -e "\033[0m"

    hey "$1"
}

# Start the pulse in the background
alive-beep.sh &

./docker-build.sh

# Capture the build status before kill overwrites $?
status=$?

kill $!

if [ $status -eq 0 ]
then
    /usr/bin/afplay "$HOME/Sounds/success.wav"
    emit "Docker build completed" 0
else
    /usr/bin/afplay "$HOME/Sounds/warning.wav"
    emit "Docker build failed" 1
    exit 1
fi

exit 0

(Gist: docker-build)

I am far from done experimenting with audible output; I need to try out other sounds, differentiated volume and general integration with more of my scripts.

If this article has inspired you and you come up with something cool, or if you find issues with the scripts, please let me know.


Code for Production not for Development

An anti-pattern I have observed, and probably also been the cause of, is:

Coding for Development first, Production second

The challenge with this anti-pattern is that it is pretty concealed. It is not done out of evil intentions – in fact the opposite – but it is an anti-pattern nevertheless.

The anti-pattern can have the following effects:

  • It introduces unnecessary complexity
  • It can introduce security issues
  • It obscures priorities

There are exceptions of course, where this cannot be regarded as an anti-pattern, since it is a part of the product design. These can be features like:

  • ability to run diagnostics
  • enabling of debug logs
  • dry-run mode

My argument is that if these are not a part of your product design, they are actually feature creep.

We would rather have feature creep, than creepy features

Sorry, that is just a product management joke, let’s move on.

The complexity issue is quite obvious. If the code base has features implemented to support the development process, debugging or other special traits which are not part of the specified product, it extends the code base: the number of lines of code, conditionals and logic branches. This means that the code holds additional structures not necessarily part of the designated product, and test coverage reporting, testing and QA might suffer, because the code might only be exercised based on the original product specification; the special circumstances might then not be touched by these activities. The test coverage might tell us, but then again it might not, depending on your tooling.

If done properly this is not necessarily a bad thing, and in some situations it is even acceptable; an example could be log levels.

Modern loggers can differentiate between log levels, ranging from debug over informative, warning and error messages to critical diagnostic messages. Here it is supported as part of the logging strategy, and you can decide (configure) whether debug messages should be enabled only in test or development and perhaps not in production. Such a logging facility would probably be an integration of an external component/framework, tested separately, and it can therefore be assumed to work even for the parts not necessarily enabled in the product in which it is integrated.
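As a sketch in Perl, using Log::Log4perl as an example (in a real product the threshold would come from configuration rather than being hard-coded as here):

use strict;
use warnings;
use Log::Log4perl qw(:easy);

# With $ERROR as the threshold the DEBUG statement below is silenced;
# a development configuration could use $DEBUG instead
Log::Log4perl->easy_init($ERROR);

DEBUG 'dumping connection pool state';  # development diagnostics
ERROR 'could not connect to database';  # emitted in production too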

Another issue with coding for development first is that products, or parts of products, can be complex and hard to test. This can lead to implementation of code circumventing protective and defensive facilities. If these circumventions are easy for developers to enable, they might also be enabled by users in production, which might lead to unexpected security issues or information leaks.
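As a tiny, hypothetical illustration of such a circumvention (the environment variable and the credential check are made up for the example):

use strict;
use warnings;

sub authenticated {
    my ($user, $password) = @_;

    # The development shortcut: skips the real check entirely.
    # Convenient locally, an open door if it ships to production
    return 1 if $ENV{MYAPP_SKIP_AUTH};

    # Stand-in for the real credential check
    return defined $password && $password eq 'letmein';
}

print authenticated('alice', 'wrong') ? "access granted\n" : "access denied\n";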

There is a variation of YAGNI (“You aren’t gonna need it”), which is:

If you do not want to use it in production, do not put it there

Blindly adding features, interfaces, parameters, flags etc. to assist in development extends the attack surface of your product. If these parts are not an integral part of the regular product specification, they might slip under the radar of security audits and penetration tests, as well as the normal QA and test activities.

An example could be if you do SCRUM or similar and a feature is not described as a story, but is implemented and slipped into the product; it might not pass through the normal process for a story. There exists a possible remedy, which is peer review, but again the product might be shipped with bloated facilities, which might cause trouble down the line, and people will have a hard time understanding and explaining why – since the issues are caused by unofficial parts of the code.

If you go down this path, document these approaches and, if possible, make them uniform. Do not forget that this is not hostile code – it is actually put there to help the developers – but it has to be put under the same scrutiny, and its implementation should follow the same process as other code changes and additions.

The last one – obscuring of priorities – is a bit special. Developer time is expensive, and sometimes developers come up with features just for themselves; as mentioned, they do so out of goodwill towards the process they are developing under and towards their teammates. But all in all, the features they impose on the product have not necessarily been part of the product specification, and since it is code, it can hold just as many bugs, security issues and other surprises as the rest of the product, while not having been aligned with the other stakeholders in the product development. As mentioned earlier, code might be complex – address the complexity instead of working around it, since the workaround might eventually add additional complexity.

And what might seem like a loophole might end up costing a lot of man-hours at some point in the product’s life.

If these features are truly necessary for development, get them out in the open. Spending time implementing these exceptions to the regular flow, in order to be able to test or inspect, can cost a lot of man-hours, so make sure it also brings the expected value and not just more lines of code.

All that said, implementing this anti-pattern is sometimes necessary and can help developers with troublesome products or implementations – so sometimes going anti can be the solution… but be wary.


VIEW Based Contracts for your RDBMS

We are building our application on a pretty old-school standard stack: a web application on top of an RDBMS.

Yesterday I set out to fix a bug, and after chasing it in the application layer for a long time, I ended up fixing it in the database. This was fairly easy and did not require any deployment apart from the changes to the database. The benefits of this: it works right away and it is fairly easy to test.

This led me to think it was about time I wrote a post about this architectural pattern, with which we have had much success.

Our database holds your standard RDBMS data model in third normal form. On top of this we have encapsulated the database in views.

And that is pretty much it, but let me demonstrate with a basic example.

[Diagram: model]

Here we have a basic database model consisting of two tables: one containing address data and the other containing zipcode data.

CREATE TABLE address (
    street TEXT NOT NULL,
    no TEXT NOT NULL,
    floor TEXT NOT NULL,
    door TEXT NOT NULL,
    zipcode TEXT NOT NULL,
    country TEXT NOT NULL,
    FOREIGN KEY (zipcode) REFERENCES zipcode (zipcode)
);

CREATE TABLE zipcode (
    zipcode TEXT PRIMARY KEY,
    city TEXT NOT NULL
);

The example was implemented on an SQLite database for availability, in case you want to try out the provided examples.

The basic concept is to use the data model directly, but only access it through views. So we add our first view.

CREATE VIEW zipcode_v1 AS SELECT * FROM zipcode;

Do note the naming convention of keeping the name of the encapsulated table and appending “_v1” – more on this later.

[Diagram: zipcode_v1]

This is one of the easy ones. Currently it just seems silly and like a lot of work, and one of the concerns I have heard about this approach was:

“but our database will be full of objects”

Yes, this will at least double the number of objects in your database, but the benefits of this approach outweigh the maintenance, and the objects serve a purpose.

What could the use case be for this view? Well, it could be used to populate a dropdown with zipcodes in a web application or similar.

The next natural step from the one-to-one view implementation is the more complex datatypes and hence queries. So if you want to query complete addresses, including the city, you would have to do something like:

SELECT
    a.street AS street,
    a.no AS no,
    a.floor AS floor,
    a.door AS door,
    a.country AS country,
    z.zipcode AS zipcode,
    z.city AS city
FROM zipcode z, address a
WHERE z.zipcode = a.zipcode;

What if we could just do:

SELECT
    street,
    no,
    floor,
    door,
    country,
    zipcode,
    city
FROM fulladdress_v1;

Well we can, just create a view to help us out.

CREATE VIEW fulladdress_v1 AS SELECT
    a.street AS street,
    a.no AS no,
    a.floor AS floor,
    a.door AS door,
    a.country AS country,
    z.zipcode AS zipcode,
    z.city AS city
FROM zipcode z, address a
WHERE z.zipcode = a.zipcode;

And our database now looks like this:

[Diagram: address_v1]

And we are slowly assembling contracts with our RDBMS.

As you can see from the naming of the views, the one-to-one implementations reuse the name of the encapsulated table, whereas the views encapsulating more than one table are named by intent or purpose, indicating what they aim to serve.

But is that fast enough?

In a modern RDBMS I would expect the performance hit to be insignificant; I do not, however, have the numbers to back this up – perhaps another blog post should shed some light on this. Do note that this is an architectural approach which holds other benefits; it is not aimed at high performance as such, but at maintainability and simplicity.

So now we have full encapsulation of our model.

  1. We can change the model as long as the contract is kept intact

Next up is a somewhat awful example, but imagine that somebody wants to change the model. We do not want to sound too American, so zipcode has to be exchanged for postal code.

PRAGMA foreign_keys=off;
BEGIN TRANSACTION;

DROP VIEW fulladdress_v1;

ALTER TABLE address RENAME TO _address_old;

CREATE TABLE address (
    street TEXT NOT NULL,
    no TEXT NOT NULL,
    floor TEXT NOT NULL,
    door TEXT NOT NULL,
    postal_code TEXT NOT NULL,
    country TEXT NOT NULL,
    FOREIGN KEY (postal_code) REFERENCES zipcode (zipcode)
);

INSERT INTO address (street, no, floor, door, postal_code, country)
    SELECT street, no, floor, door, zipcode, country
    FROM _address_old;

CREATE VIEW fulladdress_v1 AS SELECT
    a.street AS street,
    a.no AS no,
    a.floor AS floor,
    a.door AS door,
    a.country AS country,
    z.zipcode AS zipcode,
    z.city AS city
FROM zipcode z, address a
WHERE z.zipcode = a.postal_code;

COMMIT;

PRAGMA foreign_keys=on;

Renaming a column in SQLite is a bit cumbersome, so please bear with me; selecting another implementation than SQLite would have made the above example shorter.

  1. First we drop the view
  2. We rename the old table
  3. We create the new table with the required column name change
  4. We copy the data from the old table to the new table
  5. We re-create the view encapsulating the change and keeping our contract

[Diagram: address_v1-2]

If we were to follow through with the renaming, the zipcode table, involved via the foreign key, would have to be renamed as well. I have not included this in the example, but the pattern is the same and the benefit is the same: it can be renamed, but the encapsulation keeps the contract intact, and the applications using the database will not have to be changed.

This can be quite useful in another use case: suppose you have an existing model which you want to expose to some domain-specific area. You can then keep your original model and expose the data in views, where the data are presented following the naming of the domain-specific area.

All in all, everything looks hunky-dory. But there are some pitfalls with the whole view-based contract approach; to name the most prominent ones:

  1. Naming
  2. Transparency
  3. Maintenance
  4. Information leak

So let us go over these.

Naming is hard. For the one-to-one views you are at the mercy of your model, which is okay, but if you already have bad naming in your model, this will be reflected in the naming of your views. So one could decide to eliminate bad naming in the encapsulation layer, which brings us to transparency.

For transparency it is recommended to somewhat keep the names from the original model, since knowledge of the data model will often be embedded in the users of your database. Do note that we implemented the views on top of an existing data model, so people often resorted to relating to the actual model and not the abstraction/encapsulation – it would be nice if we could stick to the abstractions instead of the implementation for some discussions 🙂

Naming for intention is harder, but resembles a proper abstraction more than the one-to-one mapping does. We started out using the views for our services; the practice did however propagate into our batch components, where it proved quite useful.

We observed the batch components becoming slimmer, because the decision logic moved into the database contracts. A script for deleting records would simply work on a view, where all the records qualified for deletion would be available for processing, and the records not qualified for deletion would never be presented to the executing application by the view.
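In sketch form it could look like the following (a hypothetical rule – imagine addresses outside a single served country having to be purged):

-- Hypothetical intent-based view: the deletion rule lives in the
-- contract, so the batch script processes whatever the view presents
CREATE VIEW deletable_address_v1 AS
SELECT street, no, floor, door, zipcode, country
FROM address
WHERE country <> 'DK';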

When it comes to maintenance and life-cycle, we delete views when they become obsolete. This applies especially to intent-based views, where we can see that we have several revisions: “_v1”, “_v2” and “_v3”. When we can see that no applications use “_v1” anymore, we simply delete it.

The other occasion for deletion is when a view implements business rules which no longer apply and hence should not be available.

As described in the beginning of the article, we could do views like the following:

CREATE VIEW zipcode_v1 AS SELECT * FROM zipcode;

Do note that this approach opens for extensions to the model being exposed via the contract. If you do not want this automatic exposure, your views should be restricted in their implementation, only offering the fields you have agreed to in your contract.

The bug I mentioned in the beginning of the article was somewhat related to this sort of information leak: a field was not properly handled by the encapsulation and hence was exposed.

A brief example: if our database was extended with information on who inserted a given record – a created_by field, so to speak – then instead of doing:

CREATE VIEW zipcode_v1 AS SELECT * FROM zipcode;

We should do:

CREATE VIEW public_zipcode_v1 AS SELECT zipcode, city FROM zipcode;

And you could have a similar view for your internal application, defined as follows:

CREATE VIEW internal_zipcode_v1 AS SELECT zipcode, city, created_by FROM zipcode;

So you now have two views, named with intention and not leaking information beyond your contract. There is of course still the issue of object referencing, since our views do not as such restrict access across object ownership/relations – that is a topic for another blog post – but the approach does expose only the data you are interested in serving via your application, and not necessarily your full model and dataset.

But we are using an ORM?

Well, point your ORM schema/code generator at the views instead of the actual model. This also has some pitfalls, since not all RDBMSs support writable views; if you are a heavy ORM user with a database that does not support writable views, you might not have much luck with this contract approach. A combination with stored procedures or similar could be the way to go – which reminds me that I have to mention Haktan Bulut, my former manager, who introduced me to this approach in an architectural design specification.
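Where the RDBMS supports it, INSTEAD OF triggers are one way to make a view writable. In SQLite a minimal sketch could look like this:

-- Route inserts against the view to the underlying table,
-- making zipcode_v1 writable
CREATE TRIGGER zipcode_v1_insert
INSTEAD OF INSERT ON zipcode_v1
BEGIN
    INSERT INTO zipcode (zipcode, city)
    VALUES (NEW.zipcode, NEW.city);
END;

With such a trigger in place an ORM can issue INSERTs against zipcode_v1 as if it were a table.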

The concept is pretty simple, but it seems like a lot of work. I have however come to the conclusion that it saves us quite a lot of work once it is established, and as long as our contracts are sane our applications can be trimmed down:

  • It is easier to understand the interaction between the components using a contractual approach
  • We do not expose unnecessary data to our applications

It requires time to encapsulate everything, and it takes some effort to maintain, but putting changes to data exposure under control is a good thing in my book, since we always have to think about what we do when we extend or restrict the contracts – and last but not least, we can optimise the model without breaking our applications.


Date::Holidays releases – adapter pattern at large

Since the post on the release 1.10 of Date::Holidays I have released:

  • 1.11 Improved support for Date::Holidays::SK
  • 1.12 Improved support for Date::Holidays::USFederal as US
  • 1.13 Support for Date::Holidays::CA_ES, via Date::Holidays::ES
  • 1.14 Marking of Date::Holidays::UK and Date::Holidays::UK::EnglandAndWales as unsupported, using Date::Holidays::GB instead
  • 1.15 Improved support for Date::Holidays::DE
  • 1.16 Support for Date::Holidays::CZ

And I have more releases in the pipeline.

All of this work started out primarily as an attempt at getting to the bottom of the issue list. New issues do pop up as I get around the different corners and adaptations, but that is perfectly okay; I might never get to the bottom of the issue list, but at least the Date::Holidays distribution will improve and stabilise.

The work is also caused by a change in perspective, where my original motivation was to create a way to consolidate and use all of the different Date::Holidays::* distributions without having to adjust the differing interfaces of all of them.

I just spotted that my documentation lacks a section on motivation, describing the why of Date::Holidays – one more thing for the issue list.

The new perspective is that many of the distributions are not really being updated (which is a pity). Instead of creating patches for the relevant distributions, I am adjusting the adapters in Date::Holidays to implement the lacking features where possible, rather than sending patches to the authors of the respective distributions. I might do so afterwards, but since that would require a lot of effort, the other way around is faster and easier in most cases. Unfortunately there is also the chance that the original authors are unresponsive and my patches would never be released, so the strategy could be described as a “better safe than sorry” implementation.
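In sketch form the adapter idea looks something like this (a simplified illustration, not the actual Date::Holidays code – the wrapped distribution Date::Holidays::XX and its interface are hypothetical):

package Date::Holidays::Adapter::XX;

use strict;
use warnings;

use Date::Holidays::XX;    # hypothetical wrapped distribution

# Expose the unified interface expected by Date::Holidays,
# translating to whatever the wrapped distribution offers
sub holidays {
    my ($self, %params) = @_;

    # The wrapped distribution only offers a positional interface,
    # so the adapter bridges the gap here
    return Date::Holidays::XX::xx_holidays($params{year});
}

1;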

About faster and easier: I am creating small tasks which can be accomplished while commuting. I have always been inspired by Ricardo Signes (GitHub), who I think has coded more while commuting than I have in front of my computer. This might be a slight exaggeration, and Ricardo must correct me if my description is out of proportion; at the same time, there is nothing like a good programmer’s myth – well, Ricardo is truly a prolific CPAN contributor, and as such he does not require a myth.

Anyway – I am not a regular commuter, and I will primarily be biking over the summer, so this will possibly not continue at this frequency, but I do enjoy bite-size tasks and I will try to squeeze them in when I can.

The whole change of perspective (and my reading of “Clean Code” by Robert C. Martin) has engaged me a lot and as I deal with all the practical issues I am also giving some thought to the bigger picture.

Date::Holidays has always been, and will always be, a side project. Once in a while I get contacted by somebody who uses it or is trying it out, and it makes me immensely happy, but I am not fooling myself – I do not think Date::Holidays has a big audience. I do get PRs with new implementations, and that is awesome, and I will keep pushing for more integrations and adaptations, but it does not change the fact that the user base is limited.

So why do I keep coding on Date::Holidays?

Well, it is incredibly educational.

When I started out it was an exercise in the Adapter Pattern and it probably still is, but things have changed. Now I am reading “Clean Code” as mentioned and I am trying to adopt and learn some of the principles from that book.

Which leads me to the new road map for Date::Holidays, which is slowly taking shape.

  • Adopt some “Clean Code” principles
  • Factor out the “nocheck” flag (this is one of the principles)
  • Factor out general features working on implicit country lists; this simply does not belong in the class (this might be one of the principles)
  • Evaluate possible adoption of format parameter from Date::Holidays::* distributions
  • Evaluate possible implementation of localisation of data from Date::Holidays::* distributions

So for now there will be a lot of smaller releases improving the actual adapters, and at some point I will look into making a major release, taking Date::Holidays to the next level with a lot of clean code – and hopefully I will learn a lot during the process.


Team-octopus – an anti pattern?

Many of our daily stand-up meetings and much of the daily over-the-desk communication sound like:

– “Somebody needs to update the QA database”

– “The regression test fails”

– “What version of the application are you testing? Did you make sure it is the latest?”

And the classic:

– “I have installed the components locally and it works on my machine”

We also have more severe issues; in SCRUM these are called impediments. They are for the SCRUM Master to handle and might require help from people outside the SCRUM team, or even escalation.

The examples I gave above are not at this level. They can be handled by the team, and they should be handled by the team, but sometimes they fall between two chairs. In order not to confuse them with impediments I will call them obstacles: they can be overcome, but do require some effort.

Now let’s set the stage.

We are a small team. We are all experts in some areas and weaker in others. At the same time our team profile consists of a set of functional roles, making sure that all the required aspects of our software development strategy are covered.

The team has grown over time, and from time to time we have had consultants working as team members. Some of us have been with the company for a long time, others for a shorter time. This means that apart from what the individuals bring to the team based on education and prior experience, accumulated knowledge from our company also plays an important part in what and how we contribute individually.

All in all this gives a nice balance, and after many failures, experiments, constellations, plannings, retrospectives, changes and adjustments we have a good process for developing software.

But it is not perfect.

Our biggest hurdle, and what actually gives us a low bus factor for some parts, is the stuff that falls between chairs: the tasks that are in nobody’s job description, the stuff that glues everything together, the invisible workings of a modern software team.

Our team comprises the following:

– 1 team manager
– 1 front-end UI/UX developer
– 3 developers
– 2 testers, where 1 also acts as test manager
– 1 product manager – that is me 🙂

The product manager and the front-end developer design the features, the developers develop and the testers test. Our team manager sees to it that operational issues are addressed in balance with new features, and everything is pretty smooth sailing – well, apart from what seems to be a never-ending backlog, but that seems to be part of the game.

This is a good setup for making software releases, bug fixes and new features, but it does require that the infrastructure for developing software is in place. By this I mean:

– Ticketing system
– Version control
– Build system
– Continuous Integration System
– Test systems (application servers and database servers)

Our ticketing and version control systems are under the control of our operations department and work pretty much all the time. In case of problems we simply raise an issue and it is addressed.

Our build system is highly integrated into the applications and frameworks we build and is based on the platform we deploy to. So this is under version control and works quite well. At some point we experienced a lot of issues when Apple cut the ties to OpenSSL, which we solved by introducing Docker for the developers, and that works quite well – but more on that later.

For CI we also have a pretty stable setup. Once in a while our builds break; often we can identify the culprit based on the commit that triggered the build. Sometimes our test environments fail or a timed CI job fails. There are many reasons for this, and often it is based on changes to example data, or on more than one team member working on the same dataset. There are multiple ways to overcome this, and there is probably a nifty technical solution we can apply – well, we have one issue where a download of an external component can render our Jenkins setup unresponsive, which still confuses me as to why it is even possible, but we have only observed it a handful of times.

All in all we have a well-functioning team, development process and infrastructure.

But I do observe the following problems:

– When stuff breaks out of the blue, it tends to fall between chairs
– Non-product code maintenance seems to be in nobody’s interest
– Maintenance of the shared databases is nobody’s responsibility
– Long-term strategic ownership of the development platform is in nobody’s interest

I love building software, I love the whole SDLC, and I often end up addressing the stuff listed above – which is bad, because it is not in my job description, but I am simply the most senior on the team, and it is my kryptonite, in the sense that I simply cannot leave it alone when it is not working. Perhaps I should get professional help.

Which brings me back to Docker – which is a good example.

Docker saved us in a situation where maintenance of development environments was consuming a lot of hours due to problems with our stack. Apple had cut the ties to OpenSSL, so all of our local development environments were experiencing issues. Luckily we deploy to Linux, so we were not experiencing issues there.

So I introduced Docker and we could get moving. It required a lot of experimentation and we learned a lot, but we got it stabilised and we have things under control.

This made me think that Docker could play a more prominent role – yes, I think long term about the development platform and deployment platform, simply because I care.

– What if we could skip the whole deployment packaging strategy we have today and simply use Docker images for deployment?

It is hard to make such a push when it is not in your job description; it requires time and effort way beyond introducing Docker in the development team, because it involves managers and other departments.

– What if it didn’t?

If you look at the team roster and some of the problems, perhaps an operational profile on the team could do the trick – if the team was expanded with a devops role.

Historically we have had a tight separation between operations and development, due to IT policy and standardisation emphasising this.

Currently our release process is quite cumbersome and involves a certain amount of red tape. Yes, it has improved, but it is far from the one-click deployment or even continuous deployment setups you hear about.

– What if we could deploy Docker images directly into production?

This would most certainly be a boost to the feedback loop, and therefore to the team and its productivity.

– But is it a good idea? Teaming up with roles with specialties in certain areas is a good idea – we can see from our current team what specialists can contribute – but shouldn’t we aim for better integration with our existing operations team?

If you look at the team, we have no support function; our organisation only has first-level support, so we act as 2nd or 3rd level depending on how you depict it. What we did here was quite interesting: we made the support role go on tour in the team between the developers. This has increased knowledge sharing and responsibility. I cannot take credit for this – our current team manager introduced it – but I do wish I had introduced it years ago…

So perhaps the right solution to the problems we face is applying the same strategy to the operational responsibility – I think this is called devops. Then the team can handle everything by itself – problem solved!

I am still pondering my role as a product manager and how far I should take it. I see a lot of roles becoming abstract, facilitating roles – and I am simply not that type. It does not work for me; perhaps it is my techie background, but I need to have a more low-level understanding. I act as a sort of architect for parts of our systems, and one thing I have picked up is that architects who do not code lose the coupling to the thing they are architecting. You should not be a code-contributing team member, but prototypes, examples etc. can be coded without interfering with the day-to-day software development cycle.

I am still trying to find my area as a product manager and I might come back to this in a future post. For now I am trying to get the team to find out how to overcome our obstacles…


Release of Crypt::OpenSSL::X509 1.8.9

I have just released Crypt::OpenSSL::X509 1.8.9. Do note that this is not originally my distribution, but I have helped the author, Dan Sully, out a little, since I am a user of his Crypt::OpenSSL::PKCS12 and Crypt::OpenSSL::X509 and I have an interest in the distributions’ continued existence, availability and functionality.
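For context, typical use of Crypt::OpenSSL::X509 looks something like this (a small sketch reading a PEM-encoded certificate – the file name is just an example):

use strict;
use warnings;
use Crypt::OpenSSL::X509;

# Load a PEM-encoded certificate and print a few of its fields
my $x509 = Crypt::OpenSSL::X509->new_from_file('cert.pem');

print 'Subject: ',   $x509->subject,  "\n";
print 'Issuer: ',    $x509->issuer,   "\n";
print 'Not after: ', $x509->notAfter, "\n";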

So this blog post is more a description of the process of getting involved, using my involvement in Crypt::OpenSSL::X509 and its cousin Crypt::OpenSSL::PKCS12 as examples.

I started out by making a few PRs for some issues we were experiencing, and I slowly got involved, as Dan was not maintaining the distributions very actively – not doing Perl and working on other stuff, all completely acceptable reasons. Dan started by giving me co-maintainership on PAUSE/CPAN, so I could upload releases. The first release I made was simply from a fork, merging a PR post-release; not the best strategy, but it worked out.

Now I have commit privileges on both repositories, and on PAUSE/CPAN I have co-maintainership, so I can both implement changes and upload releases. Being given this privilege is most certainly daunting, and I am faced with a number of questions. Some are easy to answer, some are more difficult, and some will not be answered at this time – anyway, questions pop up in your head:

  1. How much can I change?
    • Style?
    • Toolchain?
    • Functionality?
  2. Should I fix all the issues?
  3. Do I understand all aspects of the implementation?
  4. What if I cannot contribute?

Many answers will present themselves as you start to get more and more familiar with the project in question and its different parts, over time, as you get more and more hands-on. Currently I consider myself an apprentice in this context: everything is new and confusing, and you are afraid to break something.

Modern software development is very forgiving; we have:

– version control and branching strategies
– continuous integration and unit-test suites
– collaboration platforms and open source
– and of course Google and StackOverflow

So it is very easy to get back to the original state, get feedback from either humans or machines, and get help or find examples which resemble what you are trying to accomplish.

Some of the PRs I had created enabled Travis integration for continuous integration; this was a contribution I could make without influencing the actual code – an easy one, so to speak. Other PRs addressed issues with the build tools. Both distributions are based on Module::Install, where all of my own distributions are based on Dist::Zilla, but for now it seems like a good idea to stick with what is already working; no need to change stuff just for the sake of change.

For coding style, I think it is a good idea to stick to the existing coding style of the project. When and if the project evolves even further, perhaps even on-boarding more contributors, or if PRs are getting difficult to review or understand, it will perhaps be time to document or enforce a coding style.

Which brings me to the next point. Both Crypt::OpenSSL::X509 and Crypt::OpenSSL::PKCS12 are Perl implementations on top of a C-based library. For me this is a marvellous chance to get to read some C code when reviewing PRs or familiarising myself with the project codebase.

Familiarising yourself with the existing codebase can also be accomplished by triaging bugs. The current bug count for the two projects looks as follows:

– GitHub: Crypt::OpenSSL::X509 (17 issues)
– GitHub: Crypt::OpenSSL::PKCS12 (2 issues)

So there should be something to get me started.

In my opinion you do not have to fix all bugs, but it is a good way to dig in and learn a lot. Do not be hesitant to contact the bug reporter if you have questions; they might be long-time users and have extensive knowledge of the project’s inner workings. The same goes for contributors, who might know even more, since they have actually made a change and are requesting a merge.

What got me to release Crypt::OpenSSL::X509 1.8.9 was actually a PR, which I reviewed. It was in a part of the code where I have proposed changes myself, so I would say I had an understanding of what was going on. The change however targeted an operating system with which I am not familiar – so I wrote to the contributor and asked when something was not clear to me. I got a marvellous response pointing to some good documentation, so I learned something and could complete my review.

Another strategy you can apply, if you are anxious to start hacking away, is to add tests. Check the test coverage and implement more tests in the weak spots; that is also a good way to get into the functionality and composition of the project.

My advice is to just get started: review, read, code, learn, test… I consider all of that apprentice level; when you make your first release with a feature of your own, or by request of somebody else, you are no longer an apprentice – you are a true contributor – and that is worth aiming for.

Good luck with your endeavours. There are plenty of projects to contribute to, and there is nothing wrong with being an apprentice – all masters were apprentices once.


Date::Holidays 1.10 released

Release 1.08 of Date::Holidays had some issues with the test suite, which resulted in numerous failure reports from CPAN testers; please see issue #21 for details.

This resulted in release 1.09, which addressed the problem with the bad tests. At the same time it demonstrated issues with the integration towards Date::Holidays::NZ and Date::Holidays::SK, so issues #22 and #23 were created respectively.

Issue #22 has now been addressed in release 1.10, and next up is 1.11, which is planned to address issue #23, unless something else comes up.

The adaptation of Date::Holidays::NZ also supports the regional parameter described in the Date::Holidays::NZ documentation.

So checking if New Years Day is a holiday in New Zealand, via Date::Holidays:

use Date::Holidays;

my $dh = Date::Holidays->new( countrycode => 'nz' );

if ($dh->is_holiday(year => 2018, month => 1, day => 1)) {
    print "It is\n";
}

And in particular for the region of Auckland (see Date::Holidays::NZ for details).

use Date::Holidays;

my $dh = Date::Holidays->new( countrycode => 'nz' );

if ($dh->is_holiday(year => 2018, month => 1, day => 1, region => 2)) {
    print "In Auckland it is\n";
}

You can also get a list of holidays:

use Date::Holidays;
use Data::Dumper;

my $dh = Date::Holidays->new( countrycode => 'nz' );

my $holidays_hashref = $dh->holidays(year => 2018);
print STDERR Dumper $holidays_hashref;

$VAR1 = {
    '0206' => 'Waitangi Day',
    '0402' => 'Easter Monday',
    '0102' => 'Day after New Years Day',
    '1022' => 'Labour Day',
    '1226' => 'Boxing Day',
    '1225' => 'Christmas Day',
    '0330' => 'Good Friday',
    '0425' => 'ANZAC Day',
    '0604' => 'Queens Birthday',
    '0101' => 'New Years Day'
};

And based on region:

use Date::Holidays;
use Data::Dumper;

my $dh = Date::Holidays->new( countrycode => 'nz' );

my $holidays_hashref = $dh->holidays(year => 2018, region => 2);
print STDERR Dumper $holidays_hashref;

$VAR1 = {
    '0129' => 'Auckland Anniversary Day',
    '1022' => 'Labour Day',
    '0101' => 'New Years Day',
    '0402' => 'Easter Monday',
    '1225' => 'Christmas Day',
    '0330' => 'Good Friday',
    '1226' => 'Boxing Day',
    '0102' => 'Day after New Years Day',
    '0425' => 'ANZAC Day',
    '0206' => 'Waitangi Day',
    '0604' => 'Queens Birthday'
};

Feedback most welcome,

jonasbn
