Coffee Space



Iris Messenger Reboot

Preview Image

This article has been on the backlog for quite some time, and I dug it back out after reading about Scribus over on Hacker News. It got me thinking about Iris Messenger again, and how I should reboot it.

Background

Back in 2017, from the 16th of April through to the 16th of September, I wrote a series of articles under the name of Iris Messenger. As I mentioned in the first article:

Welcome to the Iris Messenger, a free online newsletter aimed at tackling various pieces of interest regarding security, artificial intelligence, software and the greater world. The pieces themselves are aimed to be short, with a maximum of an A5 page being used allowing those with busy schedules to get a brief overview in short time, as easier to write content.

Essentially, it boiled down to collecting a series of links of interest and writing some small text around them. The real beauty of it was the formatting and seemingly simple approach.

The name itself was derived from Greek mythology: Iris was the messenger between the Gods (those in control) and humans (those without power). The logo, as seen in [1], represents an eye - symbolic of remaining watchful in a world that rewards ignorance. Releases are planned twice a month, on the 1st and 16th of each month, meaning 24 issues a year (with the volume number increasing each year). This may change over time depending on commitment and popularity - the first few months can be considered a trial period.

Title from previous article

Unfortunately, I only made it to 7 articles:

  1. Iris Messenger Volume 1, Issue 1
  2. Iris Messenger Volume 1, Issue 2
  3. Iris Messenger Volume 1, Issue 3
  4. Iris Messenger Volume 1, Issue 4
  5. Iris Messenger Volume 1, Issue 5
  6. Iris Messenger Volume 1, Issue 6
  7. Iris Messenger Volume 1, Issue 7

I believe there were a few reasons I only managed to complete this number of articles:

There were some other issues too that I never resolved, such as the URL shortening relying on Google. I thought it could always be done better, and I believe that time is likely now! What's changed?

New Ideas

I have a few new ideas to bring to the article:

Essentially, this will be just a series of things that I personally find interesting and want to share. Hopefully other people also find it interesting.

One thing I will need to do is figure out how to clearly section these ideas from one another, and prevent the result appearing as one large opinion piece.

Delivery

So next, a quick discussion about the delivery of the content.

Timing

One thing I struggled with previously was the timing of the articles. I think one article every two weeks could in theory still be achievable, but for now I will leave this open-ended. Once a month could also be a good option, meaning 12 issues a year.

Formatting

I want to deliver the content in several formats, something that was not originally offered.

Web

I will use the standard CoffeeSpace formatting for the articles, but this will likely not be the best way to view them. I suspect it will still be the main way in which most articles are consumed.

Alongside the web-based version will be the automatically generated audio that the other articles also have. At some point I will invest more time into making this experience better.

PDF

I want to maintain the printable PDF version of the article - I really liked this version. The A5 format, I suspect, is still actually quite a nice format - not too much information on each page. That would make for three columns. I would also suggest that one column is a magazine, two columns is an engineering paper, making three columns a more distinct format.

In terms of link shortening, I think I will stop bothering to do this. Using a third-party service will only make the article less robust in the future. What I think I will do is make use of either footnotes or citations. I've been a fan of the IEEE citation style for quite some time and may adopt that here.

I will also likely try to archive each of the links in the article with the Internet Archive Wayback Machine. Years ago I also came up with a way of compressing text-based websites into a Base64 Data URI, but I'm not sure this will be useful or portable.
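Both of these ideas can be sketched in a few lines of shell. This is a minimal sketch, not anything I'm committing to: the function names are my own, the curl flags are assumptions, and the Wayback Machine's public `/save/` endpoint is the only API it relies on.

```shell
#!/bin/sh
# Sketch: archive a link via the Wayback Machine's "Save Page Now"
# endpoint, and pack a small text-based page into a Base64 Data URI.
# Function names are hypothetical; curl flags are assumptions.

wayback_save_url() {
  # Requesting https://web.archive.org/save/<url> asks the Wayback
  # Machine to capture that page.
  printf 'https://web.archive.org/save/%s' "$1"
}

archive_link() {
  # -s: silent, -L: follow redirects; we don't need the returned body.
  curl -sL -o /dev/null "$(wayback_save_url "$1")"
}

data_uri() {
  # Note: -w0 (disable line wrapping) is GNU base64; BSD/macOS uses -b 0.
  printf 'data:text/html;base64,%s' "$(base64 -w0 "$1")"
}
```

The Data URI route would embed the whole page inline, which is why I'm unsure about its portability - browsers cap Data URI lengths inconsistently.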

For page length, I think I will aim for a multiple of two pages, with two pages as the lower limit. This should be relatively easy to achieve with images, larger references and attribution.

I will likely be looking towards pandoc to automate this entire process.
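As a rough idea of what that automation might look like - and this is only a sketch, assuming a plain Markdown source file; the flags are real pandoc options but the paper-size choice and output layout are my assumptions:

```shell
#!/bin/sh
# Hypothetical pandoc build step: one Markdown source produces both
# the printable PDF and the web version of an issue.

article_base() {
  # Strip the .md extension to derive output names.
  printf '%s' "${1%.md}"
}

build_article() {
  base="$(article_base "$1")"
  # Printable A5 PDF via pandoc's LaTeX route...
  pandoc "$1" -V papersize=a5 -o "$base.pdf"
  # ...and a standalone HTML page for the website.
  pandoc "$1" --standalone -o "$base.html"
}
```

Image compression, link archiving and reference generation could then hang off the same script as pre-processing steps.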

Images

I want to add more rich content to the articles in the form of images. The aim would be to keep this as lightweight, and as easily printable on a traditional printer, as possible.

I've been meaning to explore image dithering for quite some time now as a method for compressing images whilst maintaining understandability 1. Not only is it great for saving on bandwidth, but also it's great for saving CO2 emissions related to delivering server content 2.

The original title image is 1033187 bytes:

Original title image

Using the following dithering command we can further reduce the file size:

convert image -colorspace gray +dither -colors 2 -type bilevel result

We can compress this image down further to 57388 bytes (JPG):

Dithered image (JPG)

And even further down to 4558 bytes (PNG):

Dithered image (PNG)

This isn't the first time I've worked on this problem either. Applying it to the image in the original image compression article, using the following command:

convert -strip -resize 256x256 gamestop.jpg -monochrome +dither -colors 2 -type bilevel test.png

GameStop stocks meme (original)

GameStop stocks meme (compressed)

We compress the image down to 2716 bytes. I believe it not only looks respectable, but also has a nice print aesthetic to it.

I've written the following bash alias in ~/.bash_aliases (you'll need to reload your bash source, and possibly edit your .bashrc file to load the aliases if it is not already set up):

alias imagecompress='function _imagecompress(){ target="${1%.*}-cmp.png"; convert -strip "$1" -monochrome +dither -colors 2 -type bilevel "$target"; };_imagecompress'

I can therefore compress an image by just running something like:

$ ls -la gamestop.jpg
28086 gamestop.jpg
$ imagecompress gamestop.jpg
$ ls -la gamestop-cmp.png
13029 gamestop-cmp.png

As you can see, there is quite a large drop in file size, even without resizing the original files. In the future I will look to somewhat automate the compression of images.
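A batch version of the alias above might look something like the following - a sketch assuming the same ImageMagick `convert` invocation, with a hypothetical `compressed_name` helper and a naive skip for already-converted files:

```shell
#!/bin/sh
# Sketch: dither every image in the current directory to a 1-bit
# "-cmp.png" copy, skipping files that are already compressed output.

compressed_name() {
  # gamestop.jpg -> gamestop-cmp.png (same derivation as the alias)
  printf '%s-cmp.png' "${1%.*}"
}

for img in *.jpg *.jpeg *.png; do
  [ -e "$img" ] || continue                   # glob matched nothing
  case "$img" in *-cmp.png) continue ;; esac  # skip previous output
  convert -strip "$img" -monochrome +dither -colors 2 -type bilevel \
    "$(compressed_name "$img")"
done
```

This could then be folded into the article build script, so every image referenced by an issue is compressed in one pass.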

Next Steps

I will soon be doing some travelling, but scripting this could be a fun, low-effort side project.

I think that in order for this project to be viable, writing from start to finish should take no longer than 6 hours. I believe generally the following needs to be done:

  1. Development environment - Make sure that I have repeatability to generate articles, it's no fun getting an unexpected update just as you are about to publish.
  2. Markdown to PDF - Make use of a templating system to generate the final article, compress images, generate links, auto-archive links, etc.
  3. Markdown to HTML - Ensure there is a process to upload the articles to the website.
  4. Stylisation - Invest some effort into getting the look and feel right.
  5. Realistic schedule - Create a maintainable schedule for publications.

Watch this space...


  1. I will likely do a more in-depth article on this at a later date.

  2. It also means my server does less work loading content from disk, compressing it and serving it to you.