Terms and Conditions as a Service – literally

Some time ago I changed my title to Product Manager. For many years I had worked as a developer and later as team lead for a development team, so this was an interesting change.

Working as a team lead had slowly removed me from actual day-to-day coding, towards more and more human-resources-related tasks and meetings. So when it was suggested that I play a more active role in the software development, without the team responsibilities, I accepted. The only requirement presented to me in my new role was:

Use your knowledge and know-how to continuously support our software services and products.

I was a bit uneasy with the new role and perhaps mostly the title. Having worked as a developer for a long time, it was hard to lose the techie. I suggested “Technical Product Manager”, but it was denied – I got over it and at that point it really did not matter – after all it was just a title *1

Still fearing that it would move me away from coding, I decided to try to shape my new role to suit me better. The organisation I work for has never had a Product Manager before, so I figured I might as well try to outline my own role.

I started out by examining an idea I had played with for some time, but had not implemented. As a Product Manager I decided it was totally legal to create prototypes to evaluate possible candidates for our service portfolio.

The idea was to handle the problem area of “Terms and Conditions” and communication of these. The problem area can be described in the following way:

  1. The terms and conditions have to be available in a preservable format (I am not a legal specialist, so I do not know the exact wording, but this is the way it was explained to me)
  2. The terms and conditions have to be available to the end-user in the revision/version, originally presented to the user

In addition, the following, more basic, requirements followed:

  1. We want to be able to link to the current terms and conditions, so you can find them for example via a website
  2. We want to be able to link to specific revisions so we can create links for websites
  3. We want to be able to communicate the terms and conditions via email, without sending the complete terms and conditions, but just providing a link
  4. We want to support both Danish and English

I boiled together a prototype service to handle exactly these requirements; the prototype can be found on GitHub and on DockerHub.

The solution offers the following:

– Terms and Conditions can be downloaded as a PDF and this has been accepted as a preservable format
– You can link to an exact revision, for building lists for example
– You can link with a date parameter, which will give you the revision relevant for the given date
– You can link to the service and get the current revision of the Terms and Conditions
– You can point to a given translation of the document in the URL by using the language indication ‘da’ for Danish and ‘en’ for English

Let's walk through it:

– Providing PDF files as an asset is pretty easy in any modern web development framework

– The date based query:
/en/terms_and_conditions/20020611

Returns the terms and conditions active for the specified date. This can be used in emails, for example, where you can stamp the link with the current date.

– The revision based query:
/en/terms_and_conditions/revision/2

Returns revision 2 of the terms and conditions. This can be used for enumerations and listings or specific deep links.

– The basic query:
/en/terms_and_conditions

Returns current terms and conditions, which can be used for webpages where you want to show the current revision for evaluation by the requester.

– The basic query, supporting another language:
/da/terms_and_conditions

Returns the current terms and conditions in Danish; this can be changed to English by specifying en instead of da.

All of the available documents are assets to the service. They could be fetched from a database or similar; in the prototype they are just publicly available files.
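The central resolution algorithm can be sketched roughly as follows – a minimal Perl sketch, assuming revisions are stored as files named by their start date (YYYYMMDD.pdf); the naming and directory layout are my own illustration, not necessarily what the actual prototype does:

use strict;
use warnings;

# Resolve which revision applies for a given date: pick the newest
# start date that is not in the future relative to the requested date
sub resolve_revision {
    my ( $asset_dir, $requested_date ) = @_;    # date as 'YYYYMMDD'

    opendir my $dh, $asset_dir or die "Cannot open $asset_dir: $!";
    my @start_dates = sort map { /\A(\d{8})\.pdf\z/ ? $1 : () } readdir $dh;
    closedir $dh;

    # Only revisions already in effect are candidates; a future-dated
    # asset is simply ignored until its start date passes (see the
    # bonus feature described below)
    my @active = grep { $_ le $requested_date } @start_dates;

    return @active ? "$active[-1].pdf" : undef;
}

# Usage: the revision active on 2002-06-11
my $pdf = resolve_revision( 'assets/en/terms_and_conditions', '20020611' );
print defined $pdf ? "Serving $pdf\n" : "No active revision\n";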

The prototype solves the problems outlined and gives us an overview of the public facing part, meaning the service and feature parts, but can be improved in many areas, especially in regard to the internals.

– You could annotate the documents if they are no longer the current revision. My suggestion is to annotate the actual PDF; alternatively, the presentation in the browser could take care of this. The current prototype does not handle this.

– Handling the problem of different timezones can be quite complex; my recommendation is to decide on one timezone being the authoritative timezone (see the sketch after this list)

– The algorithm for resolution could be optimised

– The representation of the terms and conditions artefacts in the back-end could be moved to a database

– The date parameter is a weak point and the parameter handling could be improved; at the same time, we expect to label the URL resulting in a query with a date format we already know
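To illustrate the timezone recommendation from the list above, here is a minimal sketch; the choice of Europe/Copenhagen as the authoritative timezone is just an example assumption, not something the prototype dictates:

use strict;
use warnings;
use DateTime;

# Resolve "today" in a single authoritative timezone, regardless of
# where the request originates
my $authoritative_tz = 'Europe/Copenhagen';
my $today = DateTime->now( time_zone => $authoritative_tz )->ymd(q{});

print "Resolving against date: $today\n";    # e.g. 20171124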

The prototype even holds a bonus feature: due to the way the central algorithm works, you can actually add an asset in advance. It will not be served as the current revision of the terms and conditions until its start date has passed. This means that nobody has to work on New Year’s Eve to publish the new revision of the terms and conditions for availability on the 1st of January.

Such future revisions can of course still be retrieved by revision number. Handling of this could be implemented, but I actually consider it a good thing, since it means that you can test the application without jumping through too many hoops.

I have never worked much with prototypes on a larger scale before, but using my boring stack it was actually quite fast to get something to work. It shed light on interesting aspects of the UX and the internal implementation, like the main algorithm, and finally it provided a proof of concept which could spark new ideas.

Becoming a product manager is hard, but it does not necessarily mean that you have to be removed from coding. Prototyping is a lot of fun and this is most certainly not the last time I will approach a problem area in this way.

*1 Title changes can backfire; ever since I changed my title on LinkedIn I have received a lot of Product Manager-related material.


Test::Timer 2.09

I have recently released the Perl distribution Test::Timer 2.09, the last release I blogged about was 2.00 – a lot has happened in regard to stabilisation. Attempts at making some minor improvements resulted in tests failing and a long road to get things stable again.

2.09 is the culmination of a lot of releases aiming at getting stability for the tests run by CPAN-testers. I think I have succeeded, as you can read from the test reports, with 361 passes and 1 fail (at the time of writing).
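For readers who have not used the distribution, this is roughly what Test::Timer assertions look like, based on the documented interface:

use strict;
use warnings;
use Test::More tests => 2;
use Test::Timer;

# Assert that the code block completes within the threshold (seconds)
time_ok( sub { sleep 1 }, 2, 'finishes within 2 seconds' );

# Assert that the execution time falls within a specified interval
time_between( sub { sleep 2 }, 1, 3, 'runs for between 1 and 3 seconds' );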

So let's revisit the changes and releases:

2.01 2017-06-12 Bug fix release, update recommended

- Fixed bug where execution/time would be reported as 0 (#13)

A bug introduced in 2.00 – this happens; see issue #13.

2.02 2017-06-30 Maintenance release, update recommended

- Correction to documentation

- Improvements to alarm signal handling and other internal parts

- Addressed issue #15 meaning thresholds are now included in the assertions

Improvements to the test assertions, documentation and signal handling; see issue #15. This was based on a bug report from a user, so I was most happy to fix this. I do not think my distribution has many users, so I have to cater to the ones providing me with feedback and using my small open source contribution.

2.03 2017-07-01 Maintenance release, update not required

- Minor clean up in code and tests

Minor clean-up of code. I removed a lot of the Perl versions from the Travis integration; it seemed a bit overkill with so much testing, and it takes a lot of time, so I decided on only 5.10, 5.20, 5.22 and 5.24 – the next step will be to exchange 5.22 and 5.25 for 5.26.

2.04 2017-10-15 Maintenance release, update not required

- Minor improvements to Test::Timer::TimeoutException, some obsoleted code could
 be removed

- Generalising test assertions, since CPAN testers are sometimes constrained on resources,
 making it impossible to predict the actual timeout value

Example: http://www.cpantesters.org/cpan/report/2561e32c-9efa-11e7-bc90-bbe42ddde1fb

- Correction of spelling mistake in PR #16 from Gregor Herrmann

Removed some more code which was of no use to the actual implementation. I sometimes observe failing tests with CPAN-testers which I suspect are due to high loads on the smoker machines, since I am not always able to reproduce the failures. I received a PR from a Debian maintainer; see issue #16. I can only say that I am happy to support other open source contributors putting in the effort and taking the time to distribute my work.

2.05 2017-11-12 Maintenance release, update not required

- Addressed issue #11 adding experimental graphical support elements to the documentation

Added some graphical assistance, something I have pondered for a long time. You can see it in the documentation as ASCII or on the homepage for the distribution as actual images.

2.06 2017-11-14 Maintenance release, update not required

- Added cancellation of alarm, based on advice from Erik Johansen

- Implemented own sleep, based on select, this might address possible issues with
 sleep implementations

Still boxing with the issue of constrained environments, I mailed my local Perl user group and talked to one of my colleagues about some of the issues I was observing. Apparently it is not easy to identify whether a system is under heavy load. My colleague advised me to handle the alarm more appropriately; it sounded reasonable, and while it did not fix the issue, it did feel more right to add this code. At the same time I implemented my own sleep method, so I could easily exchange the implementation if the need arose. Somebody hinted to me that the sleep function could be problematic on some operating systems, so I exchanged it for select.
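The classic idiom for a hand-rolled sleep is the four-argument select; a minimal sketch of the approach (my illustration, not the exact code from the distribution):

use strict;
use warnings;

# Sleep via four-argument select instead of relying on the system's
# sleep implementation; as a bonus it handles fractional seconds
sub _sleep {
    my ($seconds) = @_;
    select( undef, undef, undef, $seconds );
    return;
}

_sleep(1.5);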

2.07 2017-11-18 Maintenance release, update not required

- Addressing issue #17, the tests are now more liberal, so when executed
 on smokers, CI environments and similar, load will not influence the
 test results. The requirement for Test::Tester has been updated and a patch
 required by this distribution has been included

Out of desperation I decided to make the tests more liberal, and yes, it did give me more passes with CPAN-testers. This change did not feel right, but I knew I could correct it again and I needed to see the feedback from CPAN-testers, even though I was treating the symptom, not the root cause of the problem; see issue #17. I am using Test::Tester, an older but really nice module. In order to implement the changes I required, I pushed a patch upstream and it got accepted, so at least I had some nice syntactic sugar for implementing the more liberal test assertions.
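Test::Tester lets you run a test assertion and then assert on its outcome and diagnostics. A hedged sketch of how an assertion could be exercised this way – my own example, not lifted from the actual test suite:

use strict;
use warnings;
use Test::Tester;    # loaded first, following its synopsis
use Test::More;
use Test::Timer;

# Run a Test::Timer assertion under observation and check the result
check_test(
    sub { time_ok( sub { sleep 1 }, 3, 'within 3 seconds' ) },
    {
        ok   => 1,                    # the assertion should pass
        name => 'within 3 seconds',
    },
    'time_ok passes for fast code'
);

done_testing();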

2.08 2017-11-20 Maintenance release, update not required

- Addressing reports on failing tests from CPAN testers

This release was yet another step in the wrong direction, ignoring the timeout test assertions by treating them as normal test failures even though the situations are not the same. When you implement unit tests and have the opportunity to be strict and make tight, correct assertions, do so. Nevertheless: more passes.

2.09 2017-11-24 Maintenance release, update not required

- Attempting to address issues with tests on Windows

REF: http://www.cpantesters.org/distro/T/Test-Timer.html?grade=3&perlmat=2&patches=2&oncpan=2&distmat=2&perlver=ALL&osname=ALL&version=2.08

- Reinstated sleep over select in the test suite

- Some test parameters were made a bit less relaxed, attempting to decrease the execution time
 for the test suite

- Removed loose match in regular expression; it should be possible to anticipate the timeout

- Removed redundant tests, trying to cut down execution time for the test suite

With release 2.09 I decided to make a real effort to kick the test suite back into shape. With focus and effort I was able to pull it through, and 2.09 passes all tests but one. I exchanged select for sleep and it proved to be a good decision.

So now I am stuck with this test failure report (excerpt):

Output from 'C:\Strawberry240\perl\bin\perl.exe ./Build test':

t/00-compile.t ............ ok

# Failed test at t/_benchmark.t line 21.
# Looks like you failed 1 test of 3.
t/_benchmark.t ............
Dubious, test returned 1 (wstat 256, 0x100)
Failed 1/3 subtests
t/author-critic.t ......... skipped: these tests are for testing by the author
t/author-pod-coverage.t ... skipped: these tests are for testing by the author
t/author-pod-syntax.t ..... skipped: these tests are for testing by the author
t/release-cpan-changes.t .. skipped: these tests are for release candidate testing
t/release-kwalitee.t ...... skipped: these tests are for release candidate testing
t/release-meta-json.t ..... skipped: these tests are for release candidate testing

# Failed test 'subtest 'time_between, failing test' of 'Failing test of time_between' compare ok'
# at t/test-tester.t line 54.
# got: '1'
# expected: '0'

# Failed test 'subtest 'time_between, failing test' of 'Failing test of time_between' compare diag'
# at t/test-tester.t line 54.
# ''
# doesn't match '(?^:Test ran \d+ seconds and did not execute within specified interval 1 - 2 seconds)'
# Looks like you failed 2 tests of 77.
t/test-tester.t ...........
Dubious, test returned 2 (wstat 512, 0x200)
Failed 2/77 subtests
t/time_alert.t ............ ok

Test Summary Report

t/_benchmark.t (Wstat: 256 Tests: 3 Failed: 1)
  Failed test: 2
  Non-zero exit status: 1
t/test-tester.t (Wstat: 512 Tests: 77 Failed: 2)
  Failed tests: 39, 42
  Non-zero exit status: 2
Files=10, Tests=84, 31 wallclock secs ( 0.07 usr + 0.13 sys = 0.20 CPU)
Result: FAIL
Failed 2/10 test programs. 3/84 subtests failed.
 
In the context of all of the other reports succeeding it does not make much sense, and it fails in a place where I have not observed a failure before – perhaps a bad smoker. Anyhow, I need to investigate.

Until next timely release – take care

jonasbn


Interacting with PAUSE using CLI

Interesting and most certainly worth a try

perlancar's blog

Any CPAN author has to interact with PAUSE, the website you go to to upload files if you want to publish your work on CPAN. There is no API provided, so you have to use a browser to upload files manually.

Well, not really. There are some modules you can use, like CPAN::Uploader to upload files or WWW::PAUSE::CleanUpHomeDir to delete old releases in your PAUSE home directory. And if you use Dist::Zilla, by default you will use CPAN::Uploader when you release your distribution, so you don’t have to go to PAUSE manually. These modules all work by scraping the website since, like it is said above, there is no API.

WWW::PAUSE::Simple is another module you can use which: 1) provides more functions (aside from uploading, currently can also list/delete/undelete/reindex files, as well as list distributions and cleanup older releases, more functions will be added in the future); 2) comes…



Contributing to a new project – a bit like starting a new job

I have been using and creating open source software for a long time; I am however of the opinion that I have never really contributed anything of significance. Yes, bug reports and your occasional PR are all important, but I have never contributed to a high-profile project, or to a bigger project or system with many contributors or an organisation behind it.

Recently I have been picking up from a lot of blog posts and podcasts that in order to evolve as a developer you have to get out of your comfort zone. I took the first step some time ago, when I decided to contribute to MarkdownTOC, a plugin for Sublime Text. Sublime Text plugins are written in Python, and my first contribution was the deletion of a single line. I do not program in Python, but I use Sublime Text, and this particular issue was scratching my own itch.

This was not much, but the positive impact was that the author actually welcomed my contribution and we started an ongoing collaboration. Since then I have contributed a lot more on the documentation side, and currently I rank second in the number of lines contributed. Not that this is prestigious to me, but it does demonstrate that contributions, even when not actual code, are significant and most appreciated.

At some point I stumbled over a tweet from the EFF (The Electronic Frontier Foundation), indicating that their open source initiatives were looking for volunteers and contributors. After some consideration – I always do a lot of considering when about to leave my comfort zone – I decided to give it a go.

I can only speak for myself, but let's take a step back and reflect on the comfort zone and open source, and why contributing to open source is a comfort zone issue.

If we look at open source in general: you make something and you put it out there for other people to use or not use, and it might be scrutinised or not. Luckily the amount of open source today is overwhelming, so you can actually open source your work, and if people do not like it or do not want to use it, they pick an alternative solution to the itch they need to scratch. This means the scrutiny and feedback might not be as tough as it could be. I guess some open source authors work in areas where their contributions are being used and viewed by thousands of other people, and there the scrutiny and feedback are different – the Linux kernel is an example.

I decided to have a look at the certbot project.

I do not program in Python; it is however an interpreted language, and being a long-time Perl programmer, based on my very limited knowledge of Python I expected the two languages to bear some similarity.

After going over the issues labelled as “good first issue”, I decided on issue #4736. I commented on the issue, since I did not want to start working on an issue where somebody was already assigned or progressing. I got a positive response and I was ready to get started.

Getting started required reading a lot of documentation on how to actually get started, how to contribute and what tools to use. Most open source projects are more than their source code. They have a lot of infrastructure integration and toolchain customisation; where some projects are “fork, hack, test, push”, here you have to install additional tools and configure these.

I started by forking the project and got Sphinx up and running on my laptop.

$ pip install Sphinx
$ cd docs
$ make html
sphinx-build -b html -d _build/doctrees   . _build/html
Running Sphinx v1.6.2
making output directory...

Exception occurred:
  File "conf.py", line 133, in <module>
    import sphinx_rtd_theme
ImportError: No module named sphinx_rtd_theme

The full traceback has been saved in /var/folders/4s/v4_4270j5ybb60t4kjwk_f080000gn/T/sphinx-err-AmhKOS.log, if you want to report the issue to the developers.
Please also report this if it was a user error, so that a better error message can be provided next time.
A bug report can be filed in the tracker at <https://github.com/sphinx-doc/sphinx/issues>. Thanks!
make: *** [html] Error 1

The first problem was an easy fix:

$ pip install sphinx_rtd_theme
$ make html
sphinx-build -b html -d _build/doctrees   . _build/html
Running Sphinx v1.6.2

Extension error:
Could not import extension repoze.sphinx.autointerface (exception: No module named repoze.sphinx.autointerface)
make: *** [html] Error 1

The second problem was yet another easy fix:

$ pip install repoze.sphinx.autointerface
$ make html 

Finally reaching success, I was able to get started on filling in the blanks.

I scanned the file structure and compared it to the documentation structure.

cert_manager.py
cli.py
eff.py
error_handler.py
hooks.py
lock.py
log.py
main.py
notify.py
ocsp.py
renewal.py

plugins/
  common_test.py
  disco_test.py
  dns_common_lexicon_test.py
  dns_common_test.py
  dns_test_common.py
  dns_test_common_lexicon.py
  manual_test.py
  null.py
  null_test.py
  selection.py
  selection_test.py
  standalone_test.py
  util_test.py
  webroot_test.py

So I added the missing documentation files. When re-generating the documentation, the following issues were observed:

certbot/cli.py:docstring of certbot.cli.HelpfulArgumentParser.add:7: WARNING: Inline emphasis start-string without end-string.
certbot/cli.py:docstring of certbot.cli.HelpfulArgumentParser.add:8: WARNING: Inline strong start-string without end-string.
certbot/error_handler.py:docstring of certbot.error_handler.ErrorHandler:6: WARNING: Inline emphasis start-string without end-string.
certbot/error_handler.py:docstring of certbot.error_handler.ErrorHandler:6: WARNING: Inline strong start-string without end-string.
certbot/error_handler.py:docstring of certbot.error_handler.ErrorHandler:6: WARNING: Inline emphasis start-string without end-string.
certbot/error_handler.py:docstring of certbot.error_handler.ErrorHandler:6: WARNING: Inline strong start-string without end-string.
certbot/error_handler.py:docstring of certbot.error_handler.ErrorHandler.register:1: WARNING: Inline emphasis start-string without end-string.
certbot/error_handler.py:docstring of certbot.error_handler.ErrorHandler.register:1: WARNING: Inline strong start-string without end-string.

A minor nifty trick helped eliminate the warnings. Finally I was left with warnings from Sphinx indicating some files were not part of the overall document tree structure.

certbot/docs/challenges.rst:: WARNING: document isn't included in any toctree
certbot/docs/ciphers.rst:: WARNING: document isn't included in any toctree
certbot/docs/man/certbot.rst:: WARNING: document isn't included in any toctree

After this I sent my first PR for issue #4736; all of these were just technical issues, which could be solved by myself. The overall job is far from done. The next step is getting the documentation up to date, meaning the information used by Sphinx to generate the documentation also has to be aligned with the actual implementation, and I have just started on this. This does require more knowledge of certbot and more reading up on Python. My notes on Python details are growing as I cover more and more ground, and so far I have learned about:

– inner classes
– naming conventions
– module use and inheritance
– implicit returns
– the None datatype

I have many questions on the actual certbot implementation, but I will ask these with each assignment/file, as I was recommended to make a PR per updated file, and my first PR is slowly shaping up.

Starting to contribute to a larger project is hard work; it reminds me of starting a new job, as you are exposed to new systems, new tools, new processes and new colleagues. Much of what you do is similar to what you have experience with from before, but at the same time everything is different, so no matter what there is a learning curve.

People on the certbot project are friendly and most helpful, which means the comfort zone issue is alleviated. At the same time, if you focus on what you can bring to the project in question – even if this is just man-hours – you cannot fail.

If however all of your PRs are declined, all your questions are met with silence or all your inquiries are met with obnoxious responses – instead of feeling discomfort, find another project. There are plenty of other open source projects which will welcome your efforts. And no matter what happens, you will have learned, you will have evolved – and your comfort zone will have grown. No need to be hindered by the comfort zone feeling: get out there, start small, contribute and evolve.


Hacktoberfest 2017

Hacktoberfest 2017 is over.

This is the second year I have participated. The event unfortunately collided with two conferences and a serious deadline at work, so I was not able to contribute as much as I would have liked. I know this is only my second year, but it seems to be an emerging pattern, since I always seem incredibly busy around this time of year.

Anyway here is a list of my contributions.

Patch to Crypt::OpenSSL::PKCS12. We use this component at work. I did not expect this to count, but I created a PR in October, so it counted – yay! The distribution author has not yet made a release, but I will contact him shortly to see if I can help get this pushed out.

Evaluating another component we use at work, Class::Accessor, I found out this distribution had a small handful of issues. I went over these and decided to give it a shot. I contacted the author via the regular channels, which resulted in a bounced email. Luckily I know the author via Twitter and we have common friends, so I got a working email address. After getting an accept, I lifted all the proposed patches into GitHub PRs and, since all of them were minor, addressed most of the issues as PRs as well. This resulted in the first release in 8 years.

GitHub made some tweets about their GitHub Explore, and much to my disappointment Perl was not listed as a featured topic – it was not even defined as a topic. I decided to give it a go, and after much investigation into which logo to use, I could send a PR to the project.

Of the projects I had lined up, where I wanted to contribute but could not find the time, I can mention:

– I would love to contribute some more to certbot, but I could not find the time; I will blog more on this later
– The Perl distribution Business::Tax::VAT::Validation, which we also use at work; I think the documentation could do with a brush-up. I have talked to the author and he is okay with this, I just need to find the time

And then there is all my own stuff.

Hacktoberfest is great, since you are enticed to do some more open source, which means you might get exposed to other projects and perhaps even other technologies.

I will be contributing to open source continuously and I hope to be able to participate in Hacktoberfest in 2018.


DockerCon Europe 2017

I have just attended my first ever DockerCon, and I was so lucky that the conference was taking place in my hometown – Copenhagen.

It was quite awesome. I have recently attended GOTO Copenhagen at the same venue, but DockerCon was a lot bigger, with many more tracks, sessions, exhibitors and of course attendees. I have attended tool-focused tech conferences before, but primarily smaller ones – this reminded me of OSCON.

Regarding attendees, DockerCon did something very cool by facilitating a hallway track, where you could either invite other users or see what other users wanted to talk about and then make contact. This put me in contact with some other developers, and we could exchange experiences and war stories.

The Sunday before the conference I attended a hackathon organised by the local Docker User Group and one of the exhibitors (Crate.io), so I actually got to meet some of the other attendees in advance. For the first hallway track talk I attended, I met a familiar face. Later on I met complete strangers, but it was really interesting to just meet and talk about software development and Docker.

The overall focus of the conference was very much on the operational part, integration of legacy Windows and Java apps and orchestration systems like Kubernetes, Mesos, Swarm etc.

I still feel a bit like a Docker n00b, but attending a talk by @abbyfuller showed me that I am at least getting much of the image construction right. I still picked up a lot of good information, and it is always good to attend a conference to get your knowledge consolidated and debugged.

Another very good talk by @adrianmouat was entitled: “Tips and Tricks of the Captains”, this presentation was jam-packed with good advice and small hacks to make your day to day work with Docker more streamlined. Do check out the linked slides.

I attended a lot of talks and got a lot of information; it will take me some time to get the notes clarified and translated into actionable items. I can however mention:

– freezing of containers for debugging
– multi stage builds
– improved security for running containers (user id setting) and use of tmpfs for mount points
– The scratch image

In addition to the talks I visited a lot of exhibitors. I made a plan of exhibitors to visit based on our current platform at work. My conclusion is that Docker is here to stay, and the integrations being offered are truly leveraging container technology, making it more and more interesting to evaluate in the context of using Docker in production. Currently we only use it for development; the next steps to evaluate are test and QA.

Many of the companies making Docker integrations even offer their projects as open source, such as Crate.io with CrateDB and conjur from CyberArk – I had never heard of these companies before. Crate.io sponsored the Sunday hackathon and has a very interesting database product. CyberArk’s conjur is aimed at secret sharing, an issue many of us face.

Apart from the list above and the interesting products (not only open source), the whole conference spun off a lot of ideas for other things I need to investigate, implement, evaluate and try out:

– Debugging containers (I have seen this done in the keynote from DockerCon 2016)
– Docker integration with Jenkins for CI, there is a plugin of sorts

I plan to follow up on this blog post with some more posts on Docker. The motto of the conference was something about learning and sharing – that was most certainly also practiced, so I have decided to give my two cents over the following months.
