taw's blog

The best kittens, technology, and video games blog in the world.

Saturday, February 17, 2018

Review of Star Trek: The Orville


It's usually not worth writing reviews of works with a lot of existing reviews. Either you agree with the consensus, and so provide very little information beyond what's already there, or you disagree, and the review probably says more about your taste than about the subject.

Lack of consensus

Not this time. There's no consensus at all. Critics absolutely hate The Orville, while the audience loves it. This is extremely unusual - audiences and critics usually agree very closely.

Meanwhile, Star Trek: Discovery has a far higher critics' score, and a very mediocre audience score.

So what's the deal here?

It's simple: the critics are total idiots, and the audience is completely right. Star Trek: The Orville is awesome.

The review

The Orville is 90% classic Star Trek and 10% Family Guy. This combination is just too sophisticated for the stupid critics, but it works really well.

Star Trek was always a very optimistic and lighthearted world. Even its "dark" parts like DS9 would look like a comedy by the standards of today's grimdark TV. Dark moments like In the Pale Moonlight were only so effective because they were used so sparingly, against a backdrop of endless Ferengi get-rich-quick schemes, Odo and Quark's cat-and-mouse games, shenanigans by the station's kids, Kai Winn's pettiness, and so many other touches which made the future look so much brighter than the world of today.

And it couldn't have worked any other way. Star Trek has to be bright for moral dilemmas to be engaging. If the whole world is a grimdark shithole, moral issues become irrelevant, and only survival matters.

The Orville continues doing what Star Trek was great at. Its universe, while technically not in Star Trek canon, is just like it - full of imperfect people still trying to be their best. They explore the universe, run into interesting aliens of the week, and seriously deal with new moral dilemmas, just like in previous Star Trek series. It feels like it's actually doing a better job, with the choices characters make having far more serious consequences, instead of being promptly forgotten by the next episode.

Sure, previous Star Trek series didn't employ Family Guy style humor, but '80s/'90s sensibilities wouldn't work on today's TV, and this change is far less drastic than turning Star Trek into another grimdark series like everything else.

It's a must-watch for every Star Trek lover. It's still trying to perfect its formula, but it might very well end up as the best Star Trek series ever.

The bonus review of Star Trek: Discovery

To be fair, I only watched the first episode, but that should be quite telling, as I've watched every other Star Trek series and movie, many multiple times.

Discovery is simply not Star Trek. Maybe I'd have liked it if it didn't pretend to be one, but it's too late now.

It's just ridiculously grimdark. The characters, the antagonists, the plot - everything on the screen is just ridiculously dark. It hurts to even look at the screen: there's not one scene in the whole episode where the lights are properly on - it's all dark greys and blacks everywhere.

It's so ridiculously Not Star Trek that there's a damn mutiny in the first episode, by the first officer who wants to start a war with the "Klingons"!

As if that's not enough, for some reason it ditches the Star Trek races - Discovery "Klingons" look and act nothing like Star Trek Klingons; there are a few Vulcans in the background, but they decided to ignore the whole Star Trek Vulcan canon anyway; and the rest are some humans and some weird new types nobody cares for.

As far as I can tell - and maybe that changes later - it doesn't even follow the crew ensemble formula, instead focusing hard on a single character, a woman most annoyingly named Michael.

So why does it even call itself Star Trek? That sets up expectations it's completely unwilling to meet. Perhaps it could have been a decent space show in its own unique universe; instead it decided to steal the name and then do something completely unrelated with it.

There was nothing redeeming about the whole episode, and I very much doubt it gets any better later.

Just don't bother watching it.

Saturday, February 10, 2018

Let's Play XCOM: Enemy Unknown

It's time for another classic game - XCOM: Enemy Unknown!

The Long War mod would make it last about 200-300 episodes instead of a more reasonable number like 40 or so, and let's be honest - there's no way in hell I'd be able to finish it. Even people with a lot more time than me have left a lot of Long War let's plays incomplete.

I got a bunch of minor mods; the two most interesting are:

  • Tweaking the perk tree, and the perk pool for training roulette. This mostly fixes Snipers by making Squadsight the first perk, and delaying Headshot until the major-tier perk. It also makes Supports into reliable medics by removing medic skills from the pool.
  • Making alien line of sight a bit shorter to occasionally enable ambushes. This could probably be cheesed by the kind of people who like counting tiles, but I never felt like doing so.
The rest is largely UI. I'm playing on Classic difficulty, so there should be some challenge, especially since I've probably forgotten everything about the game by now.

The let's play is almost exclusively battles, with the base management in between cut out, as you probably don't want to watch me alt-tab to spreadsheets and the wiki, counting how much power I need to build and when, and how many corpses I need to sell on the grey market to get the necessary space bucks for foundry upgrades.

Here's the first episode:



As usual, episodes come out once a day, at the same time of day, until we win or the aliens do.

Monday, January 29, 2018

London cycle hire adventures

Lilith on bike seat by catmom42 from flickr (CC-NC-ND)
Today I felt like coming home on a hire cycle (commonly known as "Boris Bikes", officially now "Santander Cycles" after a rebrand).

I went to the nearest dock, which the app claimed to be in perfectly fine order, and tried to get a bike. Error. Tried a few more times just in case - still error. I thought it was just an app error, so I tried to use the terminal instead - which is really painful, as it involves clicking through the same stupid confirmation dialogs about 25 times before it lets you rent the damn bike - but it was just as broken as the app.

Could the app somehow mark the dock as not working? Right...

So I walked to the next dock, which somehow wasn't broken, got a bike, cycled to Aldgate, and from there to the end of the pompously named "Cycle Superhighway 2" at Stratford.

Stratford is a traffic hell, with no docks near "Cycle Superhighway 2" or the station. The app claimed the nearest two docks had zero spaces left.

There was one about 10 minutes away from the station which claimed to have 4 free spaces. I went there - wasting tons of time going in circles, as a direct route is not possible anywhere near Stratford - and there was only one space, which didn't take the bike in spite of repeated attempts, for whatever reason. First it flashed red, then it just gave up.

After a lot of loud cursing, I had to cycle to the next dock, which fortunately somehow had free spaces, probably because it was really damn far from the station.

The cycling itself was fine; just about everything else about the experience was totally miserable, and of course they charged me extra for the time wasted on broken docking stations.

Which is pretty much what I learned to expect from the TfL. Unionized 💩💩💩.

I really hope some competition from dockless bikes comes to London. Right now none of these new schemes allow realistic commuting between Central London and zones 2/3, but there's some hope for the future.

Tuesday, January 23, 2018

Let's Play Civilization 5 as Rome

It's been a long while since I last recorded any Let's Plays, so here's a new one!

It turns out not everybody has switched to Civilization 6, and in fact new mods for Civilization 5 keep coming out. This campaign features one of them - 5th Unique Component. Other mods are just as before.

As Rome, our unique abilities are:

  • 25% faster production in other cities of every building which is already built in our capital
  • Ballista - unique Catapult replacement - slightly stronger
  • Legion - unique Swordsman replacement - slightly stronger, can build roads and forts
  • Forum - unique Market replacement - +10% Great Person
  • Thermae - unique Garden replacement - +1 Science, +1 Culture, +1 Food.
  • Aedes - unique Temple replacement - +1 Global Happiness, cheaper to build, costs no maintenance, and doesn't require a Shrine.
Overall these feel like mid-tier abilities or a bit below. Faster production scales with the number of cities, and the Aedes gives us a bit of happiness breathing room, so together they support a bit of expansion. The unique units come too early and are of the wrong kind to have much impact - early wars are dominated by Composite Bowmen supported by Horsemen.

I'd definitely love to hear feedback, especially technical feedback - I'm recording at a higher resolution, and I'm not totally sure if I set up OBS correctly for it.


One video a day, full playlist going to be here.




Saturday, January 20, 2018

New Hash methods in Ruby 2.5 and hash-polyfill

The Cat by marcinlachowicz.com from flickr (CC-NC-ND)

Ruby 2.5 includes a bunch of new Hash methods:
  • Hash#slice
  • Hash#transform_keys
  • Hash#transform_keys!
The first two do exactly what you'd expect - the same as ActiveSupport's methods with the same names do. In case of a key conflict Hash#transform_keys will quietly overwrite keys, which is somewhat questionable behaviour, but it's not like there's an obvious better way.
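
For example, on Ruby 2.5 (a quick sketch of the behaviour):

h = { a: 1, b: 2, c: 3 }
h.slice(:a, :b)          # => {:a=>1, :b=>2}
h.transform_keys(&:to_s) # => {"a"=>1, "b"=>2, "c"=>3}
# key conflicts are quietly overwritten - last write wins:
{ a: 1, b: 2 }.transform_keys { :same } # => {:same=>2}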

Unfortunately Hash#transform_keys! took some shortcuts, resulting in rather questionable behaviour. I submitted a bug report, and I hope they fix it soon, but to be honest the track record of my Open Source bug submissions is rather poor.

I'm really surprised Hash#compact wasn't included.

If you want to use these new methods in older Ruby versions, or if you want to use methods from future Rubies like Hash#compact, Hash#select_values, Hash#select_keys etc., I updated the hash-polyfill gem too.
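
Polyfilling such methods is tiny - roughly this shape (a sketch, not the gem's exact code):

class Hash
  unless method_defined?(:compact)
    def compact
      # drop all keys whose values are nil
      reject { |_k, v| v.nil? }
    end
  end
end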

I did not include Hash#transform_keys! in the gem, as it's unclear whether it will keep its current questionable behaviour or get corrected long term.

Tuesday, December 19, 2017

Challenges for November 2017 SecTalks London

Christmas Luke by Nicholas Erwin from flickr (CC-NC-ND)

Following the highly successful September round of London SecTalks, I ran another round in November.

The round consisted of 8 tasks, and they were a bit harder this time, with even the winner only finishing 7 in time - a few people completed the remaining challenges only after time ran out.

You can find challenges and code used to generate them in this spoiler-free repository.

This post doesn't contain answers, but it might spoil a bit.

Archive (5 points)

It was nearly identical to the previous round's archive challenge - a 16-level nested archive, with 1 real and 15 fake archives on every level. The only difference was that the distraction files were 0-padded to have the same size as the real file, which forced a smarter strategy than simply going for the largest file on every level.

Of course MD5ing to find the unique file, or just unpacking them all and removing duplicate files, still worked.
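
Something like this Ruby one-off would do - a sketch assuming the 15 fakes on each level are identical copies of each other, with made-up paths:

require "digest"

# the real file is the only one without an identical twin
by_md5 = Dir["level/*"].group_by { |path| Digest::MD5.file(path).hexdigest }
puts by_md5.values.find { |files| files.size == 1 }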

CSS (10 points)

The password was encoded within CSS rules. I've never seen this kind of challenge anywhere, so maybe it's the world's first?

It was very short, and every character was independent, so it seems that everyone just manually brute forced it.

Secret Message (15 points)

The answer was written in one color on a background of another, extremely similar color. Everybody managed to finish it so quickly that I didn't even have a chance to see what kind of tools they used to solve it.

EDIT: Oops, it seems that I messed up the ImageMagick options and also accidentally left the answer in the EXIF data.

Python (20 points)

As we all know Python is a whitespace-sensitive language. So I encoded some secrets in the whitespace.

Quite a few people used editors which cleaned up whitespace automatically, messing up the file. Once a person figured out what the challenge was about, it usually wasn't too hard to solve.
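
The encoding side can be as simple as this sketch (not the actual generator) - it hides one bit per line as a trailing space or tab:

# assumes the source file has at least as many lines as the secret has bits
bits = "secret".unpack1("B*").chars
lines = File.readlines("innocent.py", chomp: true)
encoded = lines.zip(bits).map do |line, bit|
  bit ? line + (bit == "0" ? " " : "\t") : line
end
File.write("challenge.py", encoded.join("\n") + "\n")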

Ruby (25 points)

The obfuscated Ruby challenge was the hardest one of the round. It used two layers of Unicode obfuscation, first with emoji, and then with CJK characters. Other than using unusual characters, the obfuscations applied weren't particularly hard.

ECB BMP (30 points)

This was a fun one. It was basically a version of the famous ECB penguin from Wikipedia.

People had a lot of trouble figuring out the dimensions and bit depth of the image, which had to be given as a hint, even though they were fairly usual.
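
Generating such an image comes down to roughly this sketch (assumed, not the actual generator) - encrypt the pixel data with AES in ECB mode while keeping the BMP header readable:

require "openssl"

data = File.binread("plain.bmp")
header, body = data[0, 54], data[54..-1]   # 54-byte BMP header stays in the clear
body = body[0, body.size - body.size % 16] # trim to the AES block size
cipher = OpenSSL::Cipher.new("AES-128-ECB")
cipher.encrypt
cipher.key = "sixteen byte key"
cipher.padding = 0
File.binwrite("ecb.bmp", header + cipher.update(body) + cipher.final)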

XOR GIF (35 points)

This was a two-step challenge. A GIF file was XOR-encrypted with a word from a dictionary.

The challenge was then to find out which Twitter account the image is from.

Since the GIF header is known, it was very easy to figure out the first few letters of the key. However, people had a lot of trouble completing it, as the word I'd chosen was only in some dictionaries. This wasn't intentional.
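
Recovering the key prefix takes just a few lines of Ruby (a sketch, with a made-up filename):

# every GIF starts with "GIF89a" (or "GIF87a")
ciphertext = File.binread("challenge.xor").bytes
key_prefix = ciphertext.first(6).zip("GIF89a".bytes).map { |c, p| (c ^ p).chr }.join
puts key_prefix # then grep a wordlist for words starting with these letters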

After getting the image, it turned out only some reverse image searches could find it properly, while others returned bogus matches.

ROT Word (40 points)

I wanted to have a task for statistical analysis of some classical cipher, but all the real ones have online tools you can use to solve them in a few seconds.

So I made one up - it's like a ROT cipher with a multi-letter key, except each key letter is used for a whole word, not for one letter.

encrypt("All your base are belong to us!", "omg") == "ozz kagd hgyk ofs nqxazs zu ig"

For a bit of extra challenge the message was in English, but contained a bunch of non-English proper names.

Final Thoughts

I made this one just a bit harder, and maybe it was a tiny bit too much.

Overall, a lot of fun happened.

I'd definitely recommend CTFd server for this.

Tuesday, November 28, 2017

How to watch high speed let's plays on London Underground

le petit chat by FranekN from flickr (CC-NC-ND)

Apparently the idea that some places - like London trains - are offline never occurred to anyone in California or Seattle or wherever the people who write mobile software tend to live. And even support for high speed playback is not quite as common as it should be.

So I came up with a process to deal with it - which, even with all the scripts to automate it, still has far too many steps. I'm not saying I recommend it to anyone, but maybe someone needs to do something similar, and they might use it as a starting point.

Download let's plays

First, let's find a bunch of let's plays we want to watch. It's best to use playlists instead of individual videos to reduce URL copy and pasting time, but it works for both.

To download them we can use youtube-dl, which is available as a homebrew package (brew install youtube-dl), or you can get it from here.

$ youtube-dl -t -f "bestvideo[ext=mp4]+bestaudio[ext=m4a]/best[ext=mp4]/best" \
  "url1" "url2" "url3"

Youtube offers videos in many formats, and the arguments above are what I found to result in the highest quality and best compatibility with various video players. Default settings often ended up causing issues.

Speed up the videos

There's plenty of command line tools to manipulate audio and video, and they tend to have ridiculously complicated interfaces.

I wrote the speedup_mp3 script (available in my unix-utilities repository), which wraps all such tools to provide easy speedup of various formats - including video files.

The script uses ffmpeg to speed up videos - as well as sox and id3v2 to deal with audio files, if you need to do the same to some podcasts. You can satisfy all those dependencies with brew install ffmpeg sox id3v2.

The script can speed up a whole directory of downloaded videos at once by a 2.0x factor:

$ speedup_mp3 -2.0 downloaded_videos/ fast_videos/

Adjust that number to your liking. Factors higher than 2.0 are not currently supported, as ffmpeg requires multiple speedup rounds in that case. I plan to add support for that later.
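
Under the hood it comes down to something like this sketch - ffmpeg's atempo audio filter only accepts factors between 0.5 and 2.0, which is where that limitation comes from:

factor = 2.0
# double the audio tempo, halve the video presentation timestamps
system("ffmpeg", "-i", "in.mp4",
  "-filter:a", "atempo=#{factor}",
  "-filter:v", "setpts=#{1.0 / factor}*PTS",
  "out.mp4")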

The process takes a lot of time, so it's best left overnight to do its thing.

The script already skips videos which exist in the target directory, so you can add more videos to the source, run it again, and it won't redo videos it has already converted.

Put them on Dropbox

Infuriatingly, there doesn't seem to be any good way to just send files to an Android tablet from a laptop over WiFi. There used to be some programs for that, but they got turned into microtransaction nonsense.

If you already use Dropbox, the easiest way is to put those sped up files there. This step is a bit awkward of course, as video files are big, people's upload speeds are often low, and the free Dropbox plan is pretty small.

If that doesn't discourage you, open the Dropbox app on your tablet, and use the checkbox to make your files available offline. You don't need to wait for it to finish syncing - Dropbox should keep updating the files as they get uploaded.

After that any video player works. I use VLC. Just open the Dropbox folder, click on the video, and it will open in VLC and play right away. The first time you do it, make sure to set VLC as the default app to avoid an extra dialog.

Isn't this ridiculously overcomplicated?

Yeah, it sort of is. Some parts of it will probably get better - for example, speed controls on video/audio playback are getting more common, so you could skip that part (watching at 100% speed is of course totally silly). It still makes some sense to pre-speedup to save space and battery on the device, as faster files are proportionally smaller, but if you feel it's not worth the hassle, you can probably find a video player with the appropriate functionality.

TfL has shown zero interest in fixing the lack of connectivity on the London Underground, and the mobile ecosystem assumes you're always online or everything breaks, so this part will probably remain a major pain point for a very long time.

The part I find most embarrassing is the lack of any builtin way to just send files over to a device. Hopefully this gets fixed soon.

Saturday, November 18, 2017

10 Unpopular Opinions

Cat by kimashi tower from flickr (CC-BY)

I posted these on Twitter a while back at Robin's request, but I wanted to elaborate a bit and give some context.

The list avoids politics and anything politics-adjacent like economics, and it's not just about preferences.

If these turn out to be not controversial enough, I might post another list sometime in the future.

Listening to audio or watching videos at 100% speed is a waste of life

People speak very slowly, and for a good reason. When you talk with another person, you need to not just process what they said - you're also preparing your responses, considering their reaction to your responses, and so on. With more than two people involved, it gets even more complicated.

None of this applies when you're just listening to something passively. Using audio speeds designed to leave you with enough brainpower to model your interlocutor, when there's no interlocutor to model, is just wasting it.

It will probably take a while to get used to it, but just speed it up - 200% should be comfortable for almost all content: podcasts, audiobooks, let's plays, TV etc. At first I used slight speedups like 120%, but I kept increasing it.

A side effect of this is that you might end up listening to music at higher speeds too (I end up using 140%), and people find this super weird.

I recommend this Chrome extension to control exact playback speed. It works with all major video sites.

I also wrote the speedup_mp3 command line tool for podcasts and audiobooks, but nowadays most devices have builtin methods.

Oh, and back in analog days, speeding up audio messed up the pitch, so everything sounded funny. That's not true of modern methods.

Any programmer who does not write code recreationally is invariably mediocre at best

This comes up every now and then on sites like reddit, and the masses of mediocre programmers are always like "oh, it's totally fine to just code at work". It's not.

Coding is unique in its ability to change the world - with even tiny amounts of effort you can affect reality. If someone never codes recreationally, this means one of:
  • They're so content they never needed or wanted to create something that didn't exist before
  • They coded some stuff, but never bothered to Open Source it
  • They'd like to, but they're just not good enough
So when you're hiring, all CVs without a github link should go straight to the bin.

"Couldn't be bothered to Open Source it" used to be sort of excusable, but nowadays it's just so easy to push something to github that Signaling 101 strongly implies people without a github account are just bad.

And that applies even to junior / graduate roles. Even if you don't have anything amazingly useful to show yet, you can still share as you learn.

Avoidance of suffering can't be basis of morality - if it was, knocking out a few pain genes would be highest moral imperative

Nobody buys morality systems based on "God said so" or "Kant said so", and when people spend too much time on utilitarianism, they run into all kinds of problems.

So it became fashionable to ignore all pleasurable parts of utilitarianism, and just focus on minimizing suffering.

This is a total nonstarter. "Pain" and "suffering" are not exactly the same thing, but if you want to minimize suffering, getting rid of pain is pretty much mandatory, and it's just a few simple gene edits to abolish it completely.

So far nobody's interested in researching gene edits for humans or animals to get rid of pain, so by revealed preference they don't actually buy their own stated beliefs that avoidance of suffering is terribly important.

An obvious objection might be that people with congenital insensitivity to pain keep getting themselves into physically dangerous situations, but that's completely irrelevant. They live in a world of pain-sensitive people, which is currently full of objects dangerous to those without pain sensitivity. It would take very modest effort to redesign common risk factors for greater safety, and to establish cultural norms of always seeking medical help just in case whenever something unusual is happening to one's body, not just when it's painful (since nothing ever will be).

Or even if that was somehow unachievable, we could simply reduce pain sensitivity without completely losing it as a signal. If it was really key to all morality, science should drop everything and focus on it.

Any takers? No? I thought so.

Mobile "games" are closer to fidget spinner than to real games

As a proud gamer, I find it infuriating that people call those mobile things "games".

It's not that they're bad games - I have no problem with bad games. They are not games.

For a good analogy, let's say you're into movies. And then someone is like "oh, I totally love movies, I put the news on TV playing in the background every morning while I get ready to go out". Ridiculous, isn't it? Somehow everybody else is spared this nonsense, except gamers.

A game - like a movie - is something you actually get fully into. In game time, or movie time.

A mobile "game" - like background TV news - is something happening part-time mentally, just to fill otherwise dead time. Like on a train, in a long queue, or otherwise when you can't do anything better.

You know what mobile "games" are closer to? Fidget spinners. Rubik's cubes. Sudokus. Toys. Not games.

That's not to say there aren't some legit games on mobile platforms, like let's say Hearthstone. They have nothing in common with all that fidget spinnery stuff.

Future medicine will develop easy fitness pill/hardware, and modern diet/exercise obsession will be wtf-tier to them

We evolved in a very different world, and recently nearly everyone all over the world is getting overweight, horribly unfit, and suffering from all kinds of chronic conditions as a result.

Currently the best way people have to deal with it is to go on ever crazier diets, spend billions on "healthy" food and weight loss products, and spend hours every week in gyms - and all that effort has at most a modest effect.

But why is any of that even remotely necessary? You already have all the genes necessary to be fit, healthy, and attractive (and if you don't, most simple genetic problems can be fixed with simple medical interventions). If that fails, it's because something about the current environment messes with your body's regulatory system so much that the result is failure to achieve your biological potential.

Contrary to "calorie" nonsense, all the dieting and exercise is just attempt to make your regulatory system work more like it's evolutionarily designed to.

At some point we'll inevitably figure out ways to monitor and affect the body's regulatory system directly, skipping this insanity of self-denial and the waste of endless hours for very modest results.

For a good example, consider the 20th century's biggest health menace - smoking cigarettes. It led to enormous social campaigns, punitive taxation, and in some especially evil countries like the UK the government is literally using death panels against smokers. Then vaping came, and you can get basically all the benefits of smoking cigarettes with basically zero of the health risks.

The problem is completely solved. Well, at least it would be if governments and society fully embraced vaping instead of treating it as smoking-tier evilness.

For an older example, people used to have crazy complicated dietary cleanliness rules to reduce their exposure to pathogens. All forgotten now, except among religious nuts. Food sold in supermarkets is pathogen-free; we moved on.

We already have some examples of this direct approach working - stomach surgery has far stronger and more immediate results than all kinds of diets and exercise put together, with zero effort needed - and there are less invasive methods in development.

There have also been a lot of pills which improved fitness and reduced obesity greatly, but they foolishly keep getting themselves banned due to rare side effects, or as part of the evil War on Drugs.

Or alternatively maybe sexbots are going to get so good everyone is going to get many hours of intense exercise every night without any self-denial. But whichever way, it's going to get solved.

MongoDB figured out the one true way to represent data as JSON documents - now if only everything else about it was any good

Relational databases are sort of insane. They essentially model data as a collection of Excel spreadsheets. There's some irrelevant mathematical nonsense like relational calculus, but it has only the most remote relationship with actual RDBMSs.

Would you consider writing a program where the only data type was an Excel spreadsheet? What kind of question is that - obviously not. Yet a lot of you use relational databases, and some ORM to make those Excel spreadsheets look kinda like something more useful, and it's painful.

Sure, they have a lot of nice stuff on top of those Excel spreadsheets - like the ability to merge multiple Excel spreadsheets into a new temporary Excel spreadsheet - but that's all they ever do.

We don't need any of that. MongoDB-style storage of data as collections of JSON documents is as close to perfect as it gets. And its performance can be pretty amazing.
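
To illustrate with made-up data - a single document naturally holds what a relational schema would scatter across three or four tables:

user = {
  name: "Alice",
  address: { city: "London", postcode: "E1 6AN" },
  orders: [
    { id: 1, total: 25.0, items: ["keyboard"] },
    { id: 2, total: 5.0, items: ["cable", "adapter"] },
  ],
}
user[:orders].sum { |o| o[:total] } # => 30.0 - no joins needed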

It's just not very good at anything else. It lacks a good query language, and the silly thing of building JSON query trees is not even remotely acceptable. Take a look at this website which translates very simple SQL into MongoDB queries. They are insane.

If we had MongoDB style data modelling, and good query language on top of it, it would win all database wars.

By the way, programs which literally use Excel as their backend engine are an actual thing.

Farm animals are generally better off than wild animals - enjoy that chicken

Wild animals live on the edge of Malthusian equilibrium - with lives just tolerable enough to survive, generally on the edge of starvation, death by predation, or disease. And in times of abundance, they just fight for status in their pack, with a lot more losers than winners. It's not a great life.

None of that applies to domesticated animals. They have safety, an abundance of food, freedom from disease, and their lives end as painlessly as possible in their prime, saving them from the degenerations of old age.

That's not to say their lives are anywhere near optimized for greatest happiness, but by any dispassionate evaluation the contrast is really one-sided.

And it's not like going vegan would somehow reduce suffering - those cows and chickens would simply never exist.

So enjoy the meat.

Popularity of javascript won't last long - compiling real languages to web assembly is near future

Javascript was never meant as a "real" general purpose language. It was created for 10-line hacks to validate some online forms and other such trivial things, and it was perfectly adequate for that. Then jQuery happened, and it turned even more into a special purpose language for browser APIs.

Thanks to the great success of web browsers as a platform, it somehow managed to tag along and is enjoying a temporary period of popularity, being used for things far bigger than is reasonable.

But Javascript has no real competitive advantages. All the advantages are in the browser APIs, and any language which compiles to something browsers can run can use them.

Right now the Web has a mix of:
  • sites with old style trivial Javascript, jQuery, and simple plugins like Facebook buttons - that's close to 99% of the web
  • sites with new Javascript frameworks - they're so rare you can't even see them in popularity statistics, except Angular, which somehow gets over the 1% mark
  • very small number of high profile custom written sites like Google Maps and Gmail
There are two orders of magnitude gaps between these categories.

Anyway, the interesting thing is that in the framework world, people have already abandoned Javascript, and use various Javascript++ languages like CoffeeScript, JSX, TypeScript, whatever Babel does, etc. And it's all compiled, with the browser never seeing the raw code.

This is all an intermediate situation, and the only long term equilibrium will be Javascript++ being displaced by actual programming languages like Ruby or Python.

Right now all the ways to use them in a browser, like Opal, are in their infancy - but when you look at the numbers, everything about Javascript frameworks is in its infancy.

Widespread piracy alternative motivated game companies to treat gamers well - less piracy led to anti-gamer behaviour like loot boxes

Video game piracy is much less common than it used to be. There are many factors, both positive and negative: Steam and other online retailers made it far easier to buy games without waiting a week for the box; there are many discount sites and promotions, so even people with less money can buy legit games; many games focus on online play, which is harder for pirates to emulate; there's been an aggressive DRM effort that mostly worked on consoles, and is even causing some delays on PCs; and popular pirating sites keep getting shut down or infected by malware.

It's still possible to pirate, but it all adds up to much lower rates (unlike, let's say, TV shows, where piracy is as rampant as ever). Whatever the reasons, the result is horrible for gamers.

Back when everyone had the alternative of easy piracy, companies were essentially forced to treat gamers well, as any bullshit would just lead to an alt-tab to The Pirate Bay. Now that piracy is much more niche, companies can do whatever they want.

Day one DLCs, DLCs which are basically bugfixes, DLCs while the game is still in Early Access, DLCs that add up to $500+, all kinds of Pay-to-Win schemes, lootboxes - all that crap is happening not because companies are getting greedier, but because abused gamers are less likely to exercise the pirate option than in the past.

There are no easy ways out. Outrage campaigns just slow down these abusive practices. Platforms like Steam could ban some of the worst abuses, and in theory even game rating agencies and governments could intervene, for example treating lootboxes as gambling and completely banning them in under-18s games. In practice governments are run by anti-gamer old people, and they're more likely to cause even more harm.

Keeping piracy option alive is the best way we have if we want to be treated with dignity.

For all its historical significance, apt-get is not really a good package manager

Twenty years ago Linux's main selling point was package managers like apt-get. You didn't need to download software from twenty sites and chase incompatibilities - you just typed one command and it was all set up properly. It even upgraded everything with one command, rarely breaking anything in the process.

It was amazing. It also didn't age well.

Just to cover some differences from the modern (mostly OSX) environment:
  • There's no reason for admin access to install most software
  • Programs are self-updating
  • Many programs have some kind of plugin system
  • Pretty much every programming language has its own package system
  • Quite often you need to install multiple versions
apt-get really doesn't deal with any of it.

A while ago I'd have found this really funny, but OSX-style package managers like Linuxbrew and Nix are now a thing on Linux.

On Cloud servers people usually use language-specific package managers, or nowadays even occasionally something like Docker.

Either way, Linux is still not on the desktop. I guess the lack of usable graphics card drivers in any distro might be among the reasons.

Wednesday, November 01, 2017

Architecture of z3 gem

Kitten by www.metaphoricalplatypus.com from flickr (CC-BY)

This post is meant for people who want to dig deep into the Z3 gem, or who want to learn from an example of how to interface with another complex C library. Regular users of Z3 are better off checking out some tutorials I wrote.

The Z3 theorem prover is a C library with a quite complex API, and the z3 gem needs to take a lot of steps to provide a good Ruby experience with it.

Z3 C API Overview

The API looks conventional at first - a bunch of black box data types like Z3_context and Z3_ast (Abstract Syntax Tree), and a bunch of functions to operate on them. For example, to create a node representing equality of two nodes, you call:

Z3_ast Z3_API Z3_mk_eq(Z3_context c, Z3_ast l, Z3_ast r);

A huge problem is that many of those calls claim to accept any Z3_ast, but actually need a particular kind of Z3_ast, otherwise you get a segfault. It's not even a static limitation - l and r can be anything, but they must represent the same type. So any kind of thin wrapper is out of the question.

Very Low Level API

The gem uses ffi to set up Z3::VeryLowLevel with direct C calls. For example, the aforementioned function is attached like this:

attach_function :Z3_mk_eq, [:ctx_pointer, :ast_pointer, :ast_pointer], :ast_pointer

There are 618 API calls, so attaching them manually would be tedious; instead a tiny subproject lives in api and generates most of it with some regular expressions. A list of C API calls is extracted from Z3 documentation into api/definitions.h. They look like this:

def_API('Z3_mk_eq', AST, (_in(CONTEXT), _in(AST), _in(AST)))

Then the api/gen_api script translates it into proper Ruby code. It might seem like this could be handled by the ffi library, but there are too many Z3-specific hacks needed. A small number of function calls can't be handled automatically, so they're written manually.

For example, the Z3_mk_add function creates a node representing addition of any number of nodes, and has a signature of:

attach_function :Z3_mk_add, [:ctx_pointer, :int, :pointer], :ast_pointer

Low Level API

There's one intermediate level between raw C calls and ruby code. Z3::LowLevel is also mostly generated by api/gen_api. Here's an example of automatically generated code:

def mk_eq(ast1, ast2) #=> :ast_pointer
  VeryLowLevel.Z3_mk_eq(_ctx_pointer, ast1._ast, ast2._ast)
end

And this one is written manually, with proper helpers:

def mk_and(asts) #=> :ast_pointer
  VeryLowLevel.Z3_mk_and(_ctx_pointer, asts.size, asts_vector(asts))
end

A few things are happening here:
  • The Z3 API requires a Z3_context pointer for almost all of its calls - we automatically provide it with the singleton _ctx_pointer.
  • We get ruby objects, and extract C pointers from them.
  • We return C pointers (FFI::Pointer) and leave the responsibility for wrapping them into Ruby objects to the caller, as we actually don't have enough information here to do so.
Another thing the Z3::LowLevel API does is set up an error callback, to convert Z3 errors into Ruby exceptions.

Ruby objects

And finally we get to Ruby objects like Z3::AST, which is a wrapper for an FFI::Pointer representing a Z3_ast. Other Z3 C data types get similar treatment.

module Z3
  class AST
    attr_reader :_ast
    def initialize(_ast)
      raise Z3::Exception, "AST expected, got #{_ast.class}" unless _ast.is_a?(FFI::Pointer)
      @_ast = _ast
    end

    # ...

    private_class_method :new
  end
end

The first weird thing is this Python-style pseudo-private ._ast. It really shouldn't ever be accessed by users of the gem, but it needs to be accessed by Z3::LowLevel a lot. Ruby doesn't have any concept of C++-style "friend" classes, so I've chosen the Python pseudo-private convention over a lot of .instance_eval or similar.

Another weird thing is that the Z3::AST class prevents object creation - only its subclasses representing nodes of specific types can be instantiated.

Sorts

Z3 ASTs represent multiple things, mostly sorts and expressions. Z3 automatically interns ASTs, so two identically-shaped ASTs will be the same underlying object (like two identical Ruby Symbols), saving us memory management hassle here.

Sorts are sort of like types. The gem creates a parallel hierarchy, so every underlying sort gets an object of its specific class. For example, here's the whole Z3::BoolSort, which should only ever have a single object.

module Z3
  class Sort < AST
    def initialize(_ast)
      super(_ast)
      raise Z3::Exception, "Sorts must have AST kind sort" unless ast_kind == :sort
    end
    # ...

module Z3
  class BoolSort < Sort
    def initialize
      super LowLevel.mk_bool_sort
    end

    def expr_class
      BoolExpr
    end

    def from_const(val)
      if val == true
        BoolExpr.new(LowLevel.mk_true, self)
      elsif val == false
        BoolExpr.new(LowLevel.mk_false, self)
      else
        raise Z3::Exception, "Cannot convert #{val.class} to #{self.class}"
      end
    end

    public_class_method :new
  end
end

The ast_kind check is for additional segfault prevention.

BoolSort.new creates Ruby object with instance variable _sort pointing to Z3_ast describing Boolean sort.

It seems a bit overkillish to set up so much structure for BoolSort, with just two instance values, but some Sort classes have multiple Sort instances. For example, Bit Vectors of width n are:

module Z3
  class BitvecSort < Sort
    def initialize(n)
      super LowLevel.mk_bv_sort(n)
    end

    def expr_class
      BitvecExpr
    end    

Expressions

Expressions are also ASTs, but they all carry a reference to the Ruby instance of their sort.

module Z3
  class Expr < AST
    attr_reader :sort
    def initialize(_ast, sort)
      super(_ast)
      @sort = sort
      unless [:numeral, :app].include?(ast_kind)
        raise Z3::Exception, "Values must have AST kind numeral or app"
      end
    end

This again might seem like overkill for expressions representing Bool true, but it's extremely important for a BitvecExpr to know if it's 8-bit or 24-bit. Because if they get mixed up - segfault.

Building Expressions

Expressions can be built from constants:

IntSort.new.from_const(42)

Declared as variables:

IntSort.new.var("x")

Or created from one or more of existing expression nodes:

module Z3
  class BitvecExpr < Expr
    def rotate_left(num)
      sort.new(LowLevel.mk_rotate_left(num, self))
    end

As you can see, the low level API doesn't know how to turn those C pointers into Ruby objects.

This interface is a bit tedious for the most common cases, so there are wrappers with a simple interface, which also allow mixing Z3 expressions with Ruby expressions, with a few limitations:

Z3::Int("a") + 2 == Z3::Int("b")

For some advanced uses you actually need the whole interface.
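
Putting it all together, typical usage looks roughly like this - a sketch based on the gem's tutorials, assuming the Z3::Solver API (assert, satisfiable?, model):

require "z3"

solver = Z3::Solver.new
a = Z3.Int("a")
b = Z3.Int("b")
solver.assert a + b == 10
solver.assert a - b == 4
if solver.satisfiable?
  p solver.model # a => 7, b => 3
end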

Creating Sorts and Expressions from raw pointers

For ASTs we construct ourselves, we track their sorts. Unfortunately sometimes Z3 gives us raw pointers and we need to guess their types - most obviously when we actually get a solution to our set of constraints.

Z3's introspection API lets us figure this out, and find the proper Ruby objects to connect to.

It has the unfortunate limitation that we can only see the underlying Z3 sorts. I'd prefer to have SignedBitvectorExpr and UnsignedBitvectorExpr as separate types with nice APIs; unfortunately there's no way to infer whether an answer Z3 gave came from a Ruby SignedBitvectorExpr or UnsignedBitvectorExpr, so that idea can't work.

Printer

Expressions need to be turned into Strings for human consumption. Z3 comes with its own printer, but it produces messy Lisp-like syntax, with a lot of weirdness for edge cases.

The gem instead implements its own printer in traditional math notation. Right now it sometimes overdoes explicit parentheses.

Examples

The gem comes with a set of small and intermediate examples in the examples/ directory. They're a good starting point for learning common use cases.

There are obvious things like sudoku solvers, but also a regular expression crossword solver.

Testing

Testing uses RSpec and has two parts.

Unit tests require a lot of custom matchers, as most objects in the gem override ==.

Some examples:

let(:a) { Z3.Real("a") }
let(:b) { Z3.Real("b") }
let(:c) { Z3.Real("c") }
it "+" do
  expect([a == 2, b == 4, c == a + b]).to have_solution(c => 6)
end

Integration tests run everything in examples and verify that output is exactly as expected. I like reusing other things as test cases like this.

How to properly set up RSpec

kitten by trash world from flickr (CC-NC-ND)

This post is recommended for everyone from total beginners to people who literally created RSpec.

Starting a new project

When you start a new ruby project, it's common to begin with:

$ git init
$ rspec --init

to create a repository and some sensible TDD structure in it.

Or for rails projects:

$ rails new my-app -T
$ cd my-app

Then edit the Gemfile, adding rspec-rails to the right group:

group :development, :test do
  gem "rspec-rails"
end

And:

$ bundle install
$ bundle exec rails g rspec:install

I feel all those Rails steps really ought to be folded into a single operation. There's no reason why rails new can't take options for a bunch of popular packages like rspec, and there's no reason why we can't have some kind of bundle add-development-dependency rspec-rails to manage simple Gemfiles automatically (like npm already does).

But this post is not about any of that.

What test frameworks are for

So why do we even use test frameworks, really, instead of plain ruby? A minimal test suite is just a collection of test cases - which can be simple methods, or functions, or code blocks, or whatever works.

The most important thing a test framework provides is a test runner, which runs each test case, gathers results, and reports them. What are the possible results of a test case?
  • Test case could pass
  • Test case could have test assertion which fails
  • Test case could crash with an error
And here's where everything went wrong. For silly historical reasons, test frameworks decided to treat test assertion failure as if it were the test crashing with an error. This is just insane.

Here's a tiny toy test, it's quite compact, and reads perfectly fine:

it "Simple names are treated as first/last" do
  user = NameParser.parse("Mike Pence")
  expect(user.first_name).to eq("Mike")
  expect(user.middle_name).to eq(nil)
  expect(user.last_name).to eq("Pence")
end

If assertion failures are treated as errors, and the first name assertion fails, then we still have no idea what the code actually returned for the other fields - and at this point the developer will typically run binding.pry or equivalent, just to mindlessly copy and paste checks which are already in the spec!

We want the test case to keep going, and then all assertion failures to be reported afterwards!

Common workarounds

There's a long list of workarounds. Some people go as far as recommending "one assertion per test", which is an absolutely awful idea that results in enormous amounts of boilerplate and hard to read, disconnected code. Very few real world projects follow it:

describe "Simple names are treated as first/last" do
  let(:user) { NameParser.parse("Mike Pence") }

  it do
    expect(user.first_name).to eq("Mike")
  end

  it do
    expect(user.middle_name).to eq(nil)
  end

  it do
    expect(user.last_name).to eq("Pence")
  end
end

RSpec has some shortcuts for writing this kind of one-assertion test, but the whole idea is just misguided, and very often it's really difficult to twist a test case into a set of reasonable "one assertion per test" cases, even disregarding code bloat, readability, and performance impact.

Another idea is to collect all assertions into one. As the vast majority of assertions are simple equality checks, this usually sort of works:

it "Simple names are treated as first/last" do
  user = NameParser.parse("Mike Pence")
  expect([user.first_name, user.middle_name, user.last_name])
    .to eq(["Mike", nil, "Pence])
end

Not exactly amazing code, but at least it's compact.

Actually...

What if the test framework was smart enough to keep going after an assertion failure? It turns out RSpec can do just that, but you need to explicitly tell it to be sane, by putting this in your spec/spec_helper.rb:

RSpec.configure do |config|
  config.define_derived_metadata do |meta|
    meta[:aggregate_failures] = true
  end
end

And now the code we always wanted to write magically works! If the parser fails, we see all the failed assertions listed. This really should be on by default.

Limitations

This works with expect and should syntax, and doesn't clash with any commonly used RSpec functionality.

It does not work with config.expect_with :minitest, which is how you can use assert_equal syntax with the RSpec test driver. It's not a common thing to do, other than to help migrate from minitest to RSpec, and there's no reason why it couldn't be made to work in principle.

What else can it do?

You can write a whole loop like:

it "everything works" do
  collection.each do |example|
    expect(example).to be_valid
  end
end

And if it fails somehow, you'll get a list of just the failing examples in the test report!

What if I don't like the RSpec syntax?

RSpec syntax is rather controversial, with many fans, but many other people very intensely hating it. It has changed multiple times during its existence, including:

user.first_name.should equal("Mike")
user.first_name.should == "Mike"
user.first_name.should eq("Mike")
expect(user.first_name).to eq("Mike")

And in all likelihood it will continue changing. RSpec sort of supports more traditional expectation syntax as a plugin, but that currently doesn't support failure aggregation:

assert_equal "Mike", user.first_name

When I needed to mix them for migration reasons, I just defined assert_equal manually, and that was good enough to handle the vast majority of tests.
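
Something along these lines - a sketch of the shim I mean; routing the assertion through expect keeps failure aggregation working:

def assert_equal(expected, actual)
  expect(actual).to eq(expected)
end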

In the long term, I'd of course strongly advise all other test frameworks in every language to abandon the historical mistake of treating test assertion failures as errors, and to switch to this kind of failure aggregation.

Considering how much time a typical developer spends dealing with failing tests, even this modest improvement in the process can result in significantly improved productivity.