Test::Timer 2.09

I have recently released the Perl distribution Test::Timer 2.09. The last release I blogged about was 2.00, and a lot has happened since in regard to stabilisation: attempts at making some minor improvements resulted in failing tests and a long road to getting things stable again.

2.09 is the culmination of a lot of releases aiming at stability for the tests run by CPAN Testers. I think I have succeeded, as you can read from the test reports: 361 passes and 1 fail at the time of writing.

So let's revisit the changes and releases:

2.01 2017-06-12 Bug fix release, update recommended

- Fixed bug where execution/time would be reported as 0 (#13)

This was a bug introduced in 2.00; these things happen. See issue #13.

2.02 2017-06-30 Maintenance release, update recommended

- Correction to documentation

- Improvements to alarm signal handling and other internal parts

- Addressed issue #15 meaning thresholds are now included in the assertions

Improvements to the test assertions, documentation and signal handling; see issue #15. This was based on a bug report from a user, so I was most happy to fix it. I do not think my distribution has many users, so I have to cater to the ones providing me with feedback and using my small open source contribution.
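To illustrate the assertion style, here is a minimal sketch using Test::Timer's documented interface; the thresholds passed to the assertion are what now also appear in the failure diagnostics. The sleep durations are illustrative:

```perl
use strict;
use warnings;
use Test::More;

SKIP: {
    eval { require Test::Timer; Test::Timer->import(); 1 }
        or skip 'Test::Timer not installed', 2;

    # Upper bound only: the code must complete within 3 seconds
    time_atmost( sub { sleep 1; }, 3, 'completes within 3 seconds' );

    # Interval: on failure the diagnostic now includes the thresholds,
    # e.g. "... did not execute within specified interval 0 - 3 seconds"
    time_between( sub { sleep 1; }, 0, 3, 'completes between 0 and 3 seconds' );
}

done_testing();
```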

2.03 2017-07-01 Maintenance release, update not required

- Minor clean up in code and tests

Minor clean-up of code. I removed a lot of the Perl versions from the Travis integration; it seemed a bit overkill with so much testing and it takes a lot of time, so I decided on only 5.10, 5.20, 5.22 and 5.24. The next step will be to exchange 5.22 and 5.25 for 5.26.
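The trimmed-down Travis integration boils down to a perl version matrix along these lines (a sketch; the actual .travis.yml may differ):

```yaml
language: perl
perl:
  - "5.24"
  - "5.22"
  - "5.20"
  - "5.10"
```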

2.04 2017-10-15 Maintenance release, update not required

- Minor improvements to Test::Timer::TimeoutException, some obsoleted code could be removed

- Generalising test asserting, since CPAN testers are sometimes constrained on resources, making it impossible to predict the actual timeout value

Example: http://www.cpantesters.org/cpan/report/2561e32c-9efa-11e7-bc90-bbe42ddde1fb

- Correction of spelling mistake in PR #16 from Gregor Herrmann

Removed some more code, which was of no use to the actual implementation. I sometimes observe failing tests with CPAN Testers, which I suspect are due to high loads on the smoker machines, since I am not always able to reproduce the failures. I received a PR from a Debian maintainer, see PR #16. I can only say that I am happy to support other open source contributors putting in the effort and taking the time to distribute my work.

2.05 2017-11-12 Maintenance release, update not required

- Addressed issue #11 adding experimental graphical support elements to the documentation

Added some graphical assistance, something I have pondered for a long time. You can see it in the documentation as ASCII or on the homepage for the distribution as actual images.

2.06 2017-11-14 Maintenance release, update not required

- Added cancellation of alarm, based on advice from Erik Johansen

- Implemented own sleep, based on select, this might address possible issues with sleep implementations

Still battling the issue with constrained environments, I mailed my local Perl user group and talked to one of my colleagues about some of the issues I was observing. Apparently it is not easy to identify whether a system is under heavy load. My colleague advised me to handle the alarm more appropriately; it sounded reasonable, and while it did not fix the issue, it did feel more right to add this code. At the same time I implemented my own sleep method, so I could easily exchange the implementation if the need arose. Somebody hinted to me that the sleep function could be problematic on some operating systems, so I exchanged it for select.
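The two changes can be sketched in a few lines of core Perl. This is my reading of the approach, not the exact Test::Timer internals: a fractional sleep built on four-argument select(), and a timeboxed execution that cancels the alarm on both the success and failure paths:

```perl
use strict;
use warnings;

# Fractional sleep built on four-argument select(), avoiding the
# platform-specific behaviour of sleep() and its interaction with alarm()
sub _sleep {
    my ($seconds) = @_;
    select( undef, undef, undef, $seconds );
    return;
}

# Run a code reference under a timeout; returns elapsed seconds,
# or -1 if the alarm fired first
sub timebox {
    my ( $code, $timeout ) = @_;
    my $elapsed;
    eval {
        local $SIG{ALRM} = sub { die "timeout\n" };
        alarm $timeout;
        my $start = time;
        $code->();
        $elapsed = time - $start;
        alarm 0;    # cancel the alarm so it cannot fire later
    };
    alarm 0;        # also cancel on the failure path
    die $@ if $@ and $@ ne "timeout\n";
    return defined $elapsed ? $elapsed : -1;
}
```

Swapping the implementation back, as 2.09 later did for the test suite, then only means touching _sleep().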

2.07 2017-11-18 Maintenance release, update not required

- Addressing issue #17, the tests are now more liberal, so when executed on smokers, CI environments and similar, load will not influence the test results. The requirement for Test::Tester has been updated and a patch required by this distribution has been included

Out of desperation I decided to make the tests more liberal, and yes, it did give me more passes with CPAN Testers. This change did not feel right, but I knew I could correct it again; I needed to see the feedback from CPAN Testers, even though I knew I was treating the symptoms and not the root cause of the problem, see issue #17. I am using Test::Tester, an older but really nice module. In order to implement the changes I required, I pushed a patch upstream and it got accepted, so at least I had some nice syntactic sugar for implementing the more liberal test assertions.
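Test::Tester's appeal is that it lets a test suite make assertions about test assertions: you run a test function and then check its recorded outcome and diagnostics. A minimal sketch, with a plain ok() standing in for a Test::Timer assertion:

```perl
use strict;
use warnings;
use Test::Tester;    # bundled with recent Test-Simple distributions
use Test::More;

# check_test() runs the test in the code reference and compares the
# captured result (pass/fail, name, diagnostics) against expectations
check_test(
    sub { ok( 1, 'it passed' ) },
    {
        ok   => 1,            # the inner test should pass
        name => 'it passed',  # with this name
        diag => '',           # and emit no diagnostics
    },
    'ok() reports a pass with the expected name'
);

done_testing();
```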

2.08 2017-11-20 Maintenance release, update not required

- Addressing reports on failing tests from CPAN testers

This release took even more steps in the wrong direction, ignoring the timeout test assertions by treating them as normal test failures even though the situations are not the same. When you implement unit tests and you have the opportunity to be strict and make tight and correct assertions, do so. Nevertheless, more passes.

2.09 2017-11-24 Maintenance release, update not required

- Attempting to address issues with tests on Windows

REF: http://www.cpantesters.org/distro/T/Test-Timer.html?grade=3&perlmat=2&patches=2&oncpan=2&distmat=2&perlver=ALL&osname=ALL&version=2.08

- Reinstated sleep over select in the test suite

- Changed some test parameters to be a bit less relaxed, attempting to decrease the execution time of the test suite

- Removed loose match in regular expression, it should be possible to anticipate the timeout

- Removed redundant tests, trying to cut down execution time for the test suite

With release 2.09 I decided to make a real effort to kick the test suite back into shape. With focus and effort I was able to pull it off, and 2.09 passes all tests but one. I exchanged select for sleep and it proved to be a good decision.

So now I am stuck with this test failure report (excerpt):

Output from 'C:\Strawberry240\perl\bin\perl.exe ./Build test':

t/00-compile.t ............ ok

# Failed test at t/_benchmark.t line 21.
 # Looks like you failed 1 test of 3.
 t/_benchmark.t ............
 Dubious, test returned 1 (wstat 256, 0x100)
 Failed 1/3 subtests
 t/author-critic.t ......... skipped: these tests are for testing by the author
 t/author-pod-coverage.t ... skipped: these tests are for testing by the author
 t/author-pod-syntax.t ..... skipped: these tests are for testing by the author
 t/release-cpan-changes.t .. skipped: these tests are for release candidate testing
 t/release-kwalitee.t ...... skipped: these tests are for release candidate testing
 t/release-meta-json.t ..... skipped: these tests are for release candidate testing

# Failed test 'subtest 'time_between, failing test' of 'Failing test of time_between' compare ok'
 # at t/test-tester.t line 54.
 # got: '1'
 # expected: '0'

# Failed test 'subtest 'time_between, failing test' of 'Failing test of time_between' compare diag'
 # at t/test-tester.t line 54.
 # ''
 # doesn't match '(?^:Test ran \d+ seconds and did not execute within specified interval 1 - 2 seconds)'
 # Looks like you failed 2 tests of 77.
 t/test-tester.t ...........
 Dubious, test returned 2 (wstat 512, 0x200)
 Failed 2/77 subtests
 t/time_alert.t ............ ok

Test Summary Report

 t/_benchmark.t (Wstat: 256 Tests: 3 Failed: 1)
 Failed test: 2
 Non-zero exit status: 1
 t/test-tester.t (Wstat: 512 Tests: 77 Failed: 2)
 Failed tests: 39, 42
 Non-zero exit status: 2
 Files=10, Tests=84, 31 wallclock secs ( 0.07 usr + 0.13 sys = 0.20 CPU)
 Result: FAIL
 Failed 2/10 test programs. 3/84 subtests failed.
In the context of all of the other reports succeeding it does not make much sense, and it fails in a place where I have not observed a failure before. Perhaps a bad smoker; anyhow, I need to investigate.

Until next timely release – take care



Interacting with PAUSE using CLI

Interesting and most certainly worth a try.

perlancar's blog

Any CPAN author has to interact with PAUSE, the website you go to to upload files if you want to publish your work on CPAN. There is no API provided, so you have to use a browser to upload files manually.

Well, not really. There are some modules you can use, like CPAN::Uploader to upload files or WWW::PAUSE::CleanUpHomeDir to delete old releases in your PAUSE home directory. And if you use Dist::Zilla, by default you will use CPAN::Uploader when you release your distribution, so you don’t have to go to PAUSE manually. These modules all work by scraping the website since, like it is said above, there is no API.

WWW::PAUSE::Simple is another module you can use which: 1) provides more functions (aside from uploading, currently can also list/delete/undelete/reindex files, as well as list distributions and cleanup older releases, more functions will be added in the future); 2) comes…



Contributing to a new project – a bit like starting a new job

I have been using and creating open source software for a long time. I am however of the opinion that I have never really contributed anything of significance. Yes, bug reports and the occasional PR are all important, but I have never contributed to a high-profile project, or to a bigger project or system with many contributors or an organisation behind it.

Recently I have been picking up from a lot of blog posts and podcasts that in order to evolve as a developer you have to get out of your comfort zone. I took the first step some time ago, when I decided to contribute to MarkdownTOC, a plugin for Sublime Text, where plugins are written in Python; my first contribution was the deletion of a single line. I do not program in Python, but I use Sublime Text and this particular issue was scratching my own itch.

This was not much, but the positive impact was that the author actually welcomed my contribution and we started an ongoing collaboration. Since then I have contributed a lot more on the documentation side and currently I rank second in the number of lines contributed. Not that this is prestigious to me, but it does demonstrate that contributions even when not actual code are significant and are most appreciated.

At some point I stumbled upon a tweet from the EFF (The Electronic Frontier Foundation), indicating that their open source initiatives were looking for volunteers and contributors. After some consideration, and I always do a lot of considering when about to leave my comfort zone, I decided to give it a go.

I can only speak for myself, but let's take a step back and reflect on the comfort zone and open source, and why contributing to open source is a comfort zone issue.

If we look at open source in general: you make something and you put it out there for other people to use or not use, and it might be scrutinised or not. Luckily the amount of open source today is overwhelming, so you can open source your work, and if people do not like it or do not want to use it, they pick an alternative solution to the itch they need to scratch. This means the scrutiny and feedback might not be as tough as it could be. I guess some open source authors work in areas where their contributions are used and viewed by thousands of other people and the scrutiny and feedback is different; the Linux kernel is an example.

I decided to have a look at the certbot project.

I do not program in Python, but it is an interpreted language, and being a long-time Perl programmer, based on my very limited knowledge of Python I expected the two languages to have some similarity.

After going over the issues labelled “good first issue”, I decided on issue #4736. I commented on the issue, since I did not want to start working on an issue where somebody was already assigned or progressing. I got a positive response and I was ready to get started.

Getting started required reading a lot of documentation on how to actually get started, how to contribute and what tools to use. Most open source projects are more than their source code. They have a lot of infrastructure integration and toolchain customisation, and where some projects are just “fork, hack, test, push”, for others you have to install additional tools and configure them.

I started by forking the project and got Sphinx up and running on my laptop.

$ pip install Sphinx
$ cd docs
$ make html
sphinx-build -b html -d _build/doctrees   . _build/html
 Running Sphinx v1.6.2

making output directory...

Exception occurred:

  File "conf.py", line 133, in <module>
     import sphinx_rtd_theme
 ImportError: No module named sphinx_rtd_theme
 The full traceback has been saved in /var/folders/4s/v4_4270j5ybb60t4kjwk_f080000gn/T/sphinx-err-AmhKOS.log, if you want to report the issue to the developers.

Please also report this if it was a user error, so that a better error message can be provided next time.
 A bug report can be filed in the tracker at <https://github.com/sphinx-doc/sphinx/issues>. Thanks!
 make: *** [html] Error 1

First problem was an easy fix:

$ pip install sphinx_rtd_theme
$ make html
sphinx-build -b html -d _build/doctrees   . _build/html
Running Sphinx v1.6.2

Extension error:

Could not import extension repoze.sphinx.autointerface (exception: No module named repoze.sphinx.autointerface)
 make: *** [html] Error 1

Second problem yet another easy fix:

$ pip install repoze.sphinx.autointerface
$ make html 

Finally reaching success, I was able to get started on filling in the blanks.

I scanned the file structure and compared it to the documentation structure.



So I added the missing documentation files. When re-generating the documentation, the following issues were observed:

certbot/cli.py:docstring of certbot.cli.HelpfulArgumentParser.add:7: WARNING: Inline emphasis start-string without end-string.
 certbot/cli.py:docstring of certbot.cli.HelpfulArgumentParser.add:8: WARNING: Inline strong start-string without end-string.
 certbot/error_handler.py:docstring of certbot.error_handler.ErrorHandler:6: WARNING: Inline emphasis start-string without end-string.
 certbot/error_handler.py:docstring of certbot.error_handler.ErrorHandler:6: WARNING: Inline strong start-string without end-string.
 certbot/error_handler.py:docstring of certbot.error_handler.ErrorHandler:6: WARNING: Inline emphasis start-string without end-string.
 certbot/error_handler.py:docstring of certbot.error_handler.ErrorHandler:6: WARNING: Inline strong start-string without end-string.
 certbot/error_handler.py:docstring of certbot.error_handler.ErrorHandler.register:1: WARNING: Inline emphasis start-string without end-string.
 certbot/error_handler.py:docstring of certbot.error_handler.ErrorHandler.register:1: WARNING: Inline strong start-string without end-string.

A minor nifty trick helped eliminate the warnings. Finally I was left with warnings from Sphinx indicating that some files were not part of the overall document tree structure.

certbot/docs/challenges.rst:: WARNING: document isn't included in any toctree
 certbot/docs/ciphers.rst:: WARNING: document isn't included in any toctree
 certbot/docs/man/certbot.rst:: WARNING: document isn't included in any toctree
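These last warnings are fixed by adding the files to a toctree directive in the documentation's index file, along these lines (the exact entries and options depend on the project layout):

```rst
.. toctree::
   :maxdepth: 2

   challenges
   ciphers
   man/certbot
```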

After this I sent my first PR for issue #4736; all of these were just technical issues, which I could solve by myself. The overall job is far from done. The next step is getting the documentation up to date, meaning the information used by Sphinx to generate the documentation also has to be aligned with the actual implementation, and I have just started on this. This does require more knowledge of certbot and more reading up on Python. My notes on Python details are growing as I cover more and more ground, and so far I have learned about:

– inner classes
– naming conventions
– module use and inheritance
– implicit returns
– the None datatype

I have many questions on the actual certbot implementation, but I will ask these with each assignment/file as I was recommended to make a PR per updated file and my first PR is slowly shaping up.

Starting contributing to a larger project is hard work, it reminds me of starting a new job, as you are exposed to: new systems, new tools, new processes and new colleagues. Much of what you do is similar or you have experience from previously, but at the same time everything is different, so no matter what there is a learning curve.

People on the certbot project are friendly and most helpful, which means the comfort zone issue is alleviated. At the same time, if you focus on what you can bring to the project in question, even if that is just man hours, you cannot fail.

If however all of your PRs are declined, all your questions are met with silence or all your inquiries are met with obnoxious responses, then instead of feeling discomfort, find another project. There are plenty of other open source projects which will welcome your efforts. And no matter what happens, you will have learned, you will have evolved, and your comfort zone will have grown. No need to be hindered by the comfort zone feeling: get out there, start small, contribute and evolve.


Hacktoberfest 2017

Hacktoberfest 2017 is over.

This is the second year I have participated. The event unfortunately collided with two conferences and a serious deadline at work, so I was not able to contribute as much as I would have liked to. I know this is only my second year, but it seems to be an emerging pattern, since I always seem incredibly busy around this time of year.

Anyway here is a list of my contributions.

A patch to Crypt::OpenSSL::PKCS12. We use this component at work. I did not expect this to count, but I created the PR in October, so it counted – yay! The distribution author has not yet made a release, but I will contact him shortly to see if I can help get this pushed out.

Evaluating another component we use at work, Class::Accessor, I found out this distribution had a small handful of issues. I went over these and decided to give it a shot. I contacted the author via the regular channels, which resulted in a bounced email. Luckily I know the author via Twitter and we have common friends, so I got a working email address. After getting an accept, I lifted all the proposed patches into GitHub PRs and, since all of them were minor, addressed most of the issues as PRs as well. This resulted in the first release in 8 years.

GitHub made some tweets about their GitHub Explore, and much to my disappointment Perl was not listed as a featured topic; it was not even defined as a topic. I decided to give it a go, and after much investigation into what logo to use I could send a PR to the project.

Of the projects I had lined up, where I wanted to contribute but could not find the time, I can mention:

– I would love to contribute some more to certbot, but I could not find the time, I will blog more on this later
– The Perl distribution Business::Tax::VAT::Validation, which we also use at work, I think the documentation could do with a brush up. I have talked to the author and he is okay with this, I just need to find the time

And then there is all my own stuff.

Hacktoberfest is great, since you are enticed to do some more open source, which mean you might get exposed to other projects and perhaps even technologies.

I will be contributing to open source continuously and I hope to be able to participate in Hacktoberfest in 2018.


DockerCon Europe 2017

I have just attended my first ever DockerCon. I was lucky: the conference took place in my hometown, Copenhagen.

It was quite awesome. I recently attended GOTO Copenhagen at the same venue, but DockerCon was a lot bigger, with many more tracks, sessions, exhibitors and of course attendees. I have attended tool-focused tech conferences before, but this one reminded me of OSCON.

Regarding attendees, DockerCon did something very cool by facilitating a hallway track, where you could either invite other users to a talk or see what other users wanted to talk about and then make contact. This put me in contact with some other developers, and we could exchange experiences and war stories.

The Sunday before the conference I attended a hackathon organised by the local Docker user group and one of the exhibitors (Crate.io), so I actually got to meet some of the other attendees in advance. For the first hallway track talk I attended, I therefore met a familiar face. Later on I met complete strangers, but it was really interesting to just meet and talk about software development and Docker.

The overall focus of the conference was very much on the operational part, integration of legacy Windows and Java apps and orchestration systems like Kubernetes, Mesos, Swarm etc.

I still feel a bit like a Docker n00b, but attending a talk by @abbyfuller showed me that I am at least getting much of the image construction right. I still picked up a lot of good information, and it is always good to attend a conference to get your knowledge consolidated and debugged.

Another very good talk, by @adrianmouat, was entitled “Tips and Tricks of the Captains”. This presentation was jam-packed with good advice and small hacks to make your day-to-day work with Docker more streamlined. Do check out the linked slides.

I attended a lot of talks and got a lot of information. It will take me some time to get the notes clarified and translated into actionable items, but I can mention:

– freezing of containers for debugging
– multi stage builds
– improved security for running containers (user id setting) and use of tmpfs for mount points
– The scratch image
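Two of the items above map directly to docker run flags; a sketch with illustrative values (the image name and mount point are placeholders):

```shell
# run the container as an unprivileged uid:gid instead of root,
# and mount /run as an in-memory tmpfs with hardened options
$ docker run --user 1000:1000 --tmpfs /run:rw,noexec,nosuid myimage
```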

In addition to the talks I visited a lot of exhibitors. I made a plan of exhibitors to visit based on our current platform at work. My conclusion is that Docker is here to stay, and the integrations being offered truly leverage container technology, making it more and more interesting to evaluate in the context of using Docker in production. Currently we only use it for production; the next steps to evaluate are test and QA.

Many of the companies making Docker integrations even offer their products as open source, such as Crate.io with CrateDB and Conjur from CyberArk – I had never heard of these companies before. Crate.io sponsored the Sunday hackathon and has a very interesting database product. CyberArk's Conjur is aimed at secret sharing, an issue many of us face.

Apart from the list above and the interesting products (not only open source), the whole conference spun off a lot of ideas for other things I need to investigate, implement, evaluate and try out:

– Debugging containers (I have seen this done in the keynote from DockerCon 2016)
– Docker integration with Jenkins for CI, there is a plugin of sorts

I plan to follow up on this blog post with some more posts on Docker. The motto of the conference was something about learning and sharing, and that was most certainly also practiced, so I decided I will give my two cents over the following months.


SublimeText and EditorConfig and eclint

Following some of the cool developers on Twitter, GitHub, blogs etc., I stumbled upon EditorConfig. The homepage of the project boldly states:

EditorConfig helps developers define and maintain consistent coding styles between different editors and IDEs. The EditorConfig project consists of a file format for defining coding styles and a collection of text editor plugins that enable editors to read the file format and adhere to defined styles. EditorConfig files are easily readable and they work nicely with version control systems.

I primarily use perltidy for my Perl projects and I have used other pretty-printers in the past, so I understood what it wanted to do, but it seemed so general that it did not really bring any value, not being able to replace perltidy or similar, so I disregarded it as a fad.

Anyway, EditorConfig kept popping up in the projects I was looking at, so I decided to give it a second chance. I am not doing a lot of projects with a lot of different languages involved, but all projects do contain some source code, possibly some Markdown, and some other files in common formats.

The formatting capabilities of EditorConfig are pretty basic; it does not go into deep formatting details for all the languages out there, which would also be incredibly ambitious, but covers basic formatting like indentation size and style, encoding, and EOL and EOF handling. This seemed pretty useful for the files where I could not control the format using perltidy, so it would be a welcome extension to my toolbox.

Luckily a prolific GitHub contributor, Sindre Sorhus, had implemented a plugin for Sublime Text (my current editor of choice). So I installed the plugin, got it configured for some of my projects and started using it.

Apart from the editor part, you simply place a configuration file named .editorconfig in your project, configure it to handle the languages contained in your project, and you are good to go.
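For a Perl project, such a configuration file could look like this minimal sketch (the properties shown are standard EditorConfig properties; the values are illustrative):

```ini
# .editorconfig - top-most configuration file for the project
root = true

[*]
charset = utf-8
end_of_line = lf
insert_final_newline = true
trim_trailing_whitespace = true

[*.{pl,pm,t}]
indent_style = space
indent_size = 4
```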

The problem, well, not really a problem but a common misunderstanding, is the expectation that it reformats ALL your code. It does NOT. It only works on newly added lines. At first you might be disappointed, but just opening your editor with an active plugin should not mean that all your code has to be recommitted with extensive diffs confusing everybody (and yourself), so this is actually a reasonable constraint.

Anyway, at some point you might want to reformat everything to get a common baseline. Here eclint, which is available on GitHub, can help you. eclint can work as a linter, meaning it checks your adherence to the specified .editorconfig, but it can also apply the configuration.


$ eclint check yourfile


$ eclint fix yourfile

EditorConfig can help you keep your own formatting consistent for some of the more esoteric file formats, and when contributing to other people's projects you do not have to go back and forth over formatting issues. Well, you might, but the EditorConfig-controllable parts will be aligned. Check the website and read up on integration with your editor of choice.

eclint can help you establish a formatting baseline for your own projects, but do read the documentation and do not mix it up with your regular development or yak-shaving, since you could face large diffs.

Happy formatting,

