
I’ve been working a lot with Hubot, the framework our company uses for its chat bot. We subscribe to the ChatOps mantra, which has a lot of value: operational changes are public, searchable, backed up, and repeatable. We also use Hubot for workflows and glue code - shortcuts for code review in Github, delivering stories in Pivotal Tracker when they are deployed to a demo environment, various alerts in PagerDuty, etc.

Hubot is written in CoffeeScript, a transpiles-to-javascript language that is still the default in Rails 5. CoffeeScript initially made it easy and obvious how to write classes, inheritance, and bound functions in your Javascript. Now that ES6 has stolen most of the good stuff from CoffeeScript, I think it’s lost most of its value. But migrating legacy code to a new language is low ROI, and a giant pain even with tools like Decaffeinate. Besides, most of the Hubot plugins and ecosystem are in CoffeeScript, so there’s probably some advantage to maintaining compatibility there.

Hubot has a relatively simple abstraction for responding to messages in Slack, and has an Express server built in. Writing a Hubot script is basically writing a Node application.
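A minimal script shows both halves. The script path, route, and room below are made up, but robot.respond, robot.router, and robot.messageRoom are the real Hubot API:

# scripts/ping.coffee - a minimal Hubot script (path and route invented)
module.exports = (robot) ->
  # chat side: reply when someone says "hubot ping"
  robot.respond /ping$/i, (res) ->
    res.send 'PONG'

  # http side: Hubot's built-in Express server
  robot.router.post '/hubot/deploy-hook', (req, res) ->
    robot.messageRoom '#devops', 'Deploy webhook received!'
    res.send 'OK'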

Writing Clean Code

A chatbot isn’t customer-facing, and is often not super-critical functionality. It’s easy to just throw in some hacks, write very minimal tests (if any), and call it a day. At Hired we have tests for our Hubot commands, but we’ve never emphasized high-quality code the way we have in our main application. I’m changing that. Any app worth making is worth making well.

I’ve been trying to figure out how to break hubot scripts into clean modules. OO design is a hard enough problem in Ruby, where people actually care about clean code. Patterns and conventions like MVC provide helpful guidelines. None of that in JS land: it’s an even split whether a library will be functional, object-oriented, or function-objects. Everything’s just a private variable - no need for uppercase letters, or even full words.

While Github’s docs only talk about throwing things in /scripts, sometimes you want commands in different scripts to be able to use the same functionality. Can you totally separate these back-end libraries from the server / chat response scripts? How do you tease apart the control flow?

Promises (and they still feel all so wasted)

Promises are a critical piece of the JS puzzle. To quote Domenic Denicola:

The point of promises is to give us back functional composition and error bubbling in the async world.

~ You’re Missing the Point of Promises

I started by upgrading our app from our old promise library to Bluebird. The coolest thing Bluebird does is .catch(ErrorType), which lets you catch only specific types of errors. Combine that with the common-errors library from Shutterstock, and you get a great way to classify error states exactly.

I’m still figuring out how to use promises as a clean abstraction. Treating them like delayed try/catch blocks seems to produce clean separations. The Bluebird docs have a section on anti-patterns that was a good start. In our code I found many places where people had nested promises inside other promises, so errors never reached the original caller (or our test framework). I also saw exceptions thrown as a form of flow control, with the exception’s message used as the Slack reply. Needless to say, that’s not what exceptions are for.
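A sketch of both fixes - returning instead of nesting, and catching by type. fetchStory and markDelivered are hypothetical; Bluebird’s typed .catch and common-errors’ NotFoundError are real:

Promise = require 'bluebird'
errors  = require 'common-errors'

deliverStory = (storyId) ->
  fetchStory(storyId)
    .then (story) ->
      markDelivered(story)                # return the next promise -
    .then (story) ->                      # don't nest .then inside .then
      "Delivered \"#{story.name}\""
    .catch errors.NotFoundError, (err) -> # catch only this error type;
      "No story with id #{storyId}"       # everything else bubbles up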

Events

NodeJS comes with eventing built in. The process object is an EventEmitter, meaning you can use it like a global message bus. Hubot also acts as a global event handler, so you can track things there as well. And in CoffeeScript you can class MyClass extends EventEmitter. If you’ve got a bunch of async tasks that other scripts might need to refer to, you can have them fire off an event that other objects can respond to.

For example, our deploy process has a few short steps early on that might interfere with each other if multiple deploys happen simultaneously. We can set our queueing object to listen for a “finished all blocking calls” event on deploys, and kick off the next one while the current deploy does the rest of its steps. We don’t have to hook into the promise chain - a Deploy doesn’t even have to know about the DeployQueue, which is great decoupling. It can just do its waterfall of async operations, and fire off events at each step.
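Sketched out, with made-up class, method, and event names:

{EventEmitter} = require 'events'

class Deploy extends EventEmitter
  run: ->
    @acquireLock()                    # hypothetical blocking steps
      .then => @pushCode()
      .then =>
        @emit 'blocking-steps-done'   # queue can kick off the next deploy
        @runMigrations()              # ...while this one keeps going
      .then => @emit 'finished'

# The queue just listens - it never touches the promise chain:
deploy.on 'blocking-steps-done', -> queue.startNext()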

Storage

Hubot comes with a Brain built-in for persistent storage. For most users, this will be based on Redis. You can treat it like a big object full of whatever data you want, and it will be there when Hubot gets restarted.

The catch is: Hubot’s brain is a giant JS object, and the “persistence” is just dumping the whole thing to a JSON string and throwing it in one key in Redis. Good luck digging in from redis-cli or any interface beyond in-app code.
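Using it is as simple as it sounds - here’s a made-up karma example against the real robot.brain API:

module.exports = (robot) ->
  robot.respond /karma (\w+)/i, (res) ->
    karma = robot.brain.get('karma') or {}
    karma[res.match[1]] = (karma[res.match[1]] or 0) + 1
    robot.brain.set 'karma', karma    # serialized wholesale into one Redis key
    res.send "#{res.match[1]}: #{karma[res.match[1]]} points"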

Someone (not me) added SQLite3 for some things that kind of had a relational-ish structure. If you are going to use SQL in your node app, for cryin’ out loud use a bloody ORM. Sequelize seems to be a big player, but like any JS framework it could be dead tomorrow.

Frankly, MongoDB is a much bigger force in the NodeJS space, and seems perfect for a low-volume, low-criticality app like a chatbot. It’s relational enough to get the job done, and its schema-less documents keep things flexible. You probably won’t have to scale it or deal with the storage, clustering, and concurrency issues. With well-supported tools like Mongoose, it might be easier to organize and manage than the one-key-in-Redis brain.

We also have InfluxDB for tracking stats. I haven’t dived deep into this, so I’m not sure how it compares to statsd or Elasticsearch aggregations. I’m not even sure if they cover the same use cases or not.

Testing

Whooboy. Testing. The testing world in JS leaves much to be desired. I’m spoiled on rspec and ruby test frameworks, which have things like mocks and stubs built in.

In JS, everything is “microframeworks,” i.e. things that don’t work well together. Here’s a quick rundown of libraries we’re using:

  • Mocha, the actual test runner.
  • Chai, an assertion library.
  • Chai-as-promised, for testing against promises.
  • Supertest-as-promised, to test webhooks in your app by sending actual http requests to 127.0.0.1. Who needs integration testing? Black-box, people!
  • Nock, for expectations around calling external APIs. Of course, it doesn’t work with Mocha’s promise interface.
  • Rewire, for messing with private variables and functions inside your scripts.
  • Sinon for stubbing out methods.
  • Hubot-test-helper, for setting up and tearing down a fake Hubot.

I mean, I don’t know why you’d want assertions, mocks, stubs, dependency injection and a test runner all bundled together. It’s much better to have femto-frameworks that you have to duct tape together yourself.

Suffice it to say, there’s a lot of code to glue it all together. I had to dive into the source code for every single one of these libraries to make them play nice – neither the README nor the documentation sufficed in any instance. But in the end we get test syntax that looks like this:

describe 'PING module', ->
  beforeEach ->
    mockBot('scripts/ping').then (robot) =>
      @bot  = robot

  describe 'bot ping', ->
    it 'sends "PONG" to the channel', ->
      @bot.receive('bot ping').then =>
        expect(@bot).to.send('PONG')

The bot will shut itself down after each test, stubs and dependency injections will be reverted automatically, Nock expectations cleaned up, etc. I had to write my own Chai plugin for expect(bot).to.send(). It’s more magical than I’d like, but it’s usable without knowledge of the underlying system.
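The plugin itself is only a few lines. A rough sketch of its shape - the message-inspection details depend on hubot-test-helper’s internals, but addMethod and @assert are the standard Chai plugin API:

# chai_send.coffee - rough shape of the expect(bot).to.send() plugin
module.exports = (chai, utils) ->
  chai.Assertion.addMethod 'send', (expected) ->
    bot = @_obj
    replies = (msg for [user, msg] in bot.messages when user is 'hubot')
    @assert(
      expected in replies,
      "expected bot to send #{expected}, but it sent #{replies}",
      "expected bot not to send #{expected}"
    )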

When tests are easier to write, hopefully people will write more of them.

Wrapup

Your company’s chatbot is probably more important than you think. When things break, even the unimportant stuff like karma tracking, it can lead to dozens of distractions and minor frustrations across the team. Don’t make it a second-class citizen. It’s an app - write it like one.

While I might have preferred something like Lita, the Ruby chatbot, or just writing a raw Node / Elixir / COBOL app without the wrapping layer of Hubot, I’m making the best of it. Refactor, don’t rewrite. You can write terrible code in any language, and JS can certainly be clean and manageable if you’re willing to try.

I was running through tutorials for Elixir + Phoenix, and got to the part where forms start showing validation failures. Specifically this code, from Pragmatic Programmers’ “Programming Phoenix” book:

<%= form_for @changeset, user_path(@conn, :create), fn f -> %> 
  <%= if f.errors != [] do %>
    <div class="alert alert-danger">
      <p>Oops, something went wrong! Please check the errors below:</p>
      <ul>
        <%= for {attr, message} <- f.errors do %>
          <li><%= humanize(attr) %> <%= message %></li>
        <% end %> 
      </ul>
    </div>
  <!-- snip -->
<% end %>

I got this error: no function clause matching in Phoenix.HTML.Safe.Tuple.to_iodata/1

Couldn’t find a bloody solution anywhere. It took a long time just to find IO.inspect. The message turned out to be a tuple that looked made for sprintf - something like {"name can't be longer than %{count}", count: 1} - so I spent forever trying to figure out whether Elixir has sprintf, thought there might be something in :io.format(), then had to learn about Erlang bindings, but that wasn’t…

Ended up on the #elixir-lang IRC channel, and the author of the book (Chris McCord) pointed me to the Ecto “error helpers” in this upgrade guide. It’s a breaking change in Phoenix 1.1 + Ecto 2.0. The book is (and I imagine many tutorials are) for Phoenix 1.0.x, and I had installed the latest, 1.1.0.
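The fix boils down to a tiny view helper that interpolates those %{count}-style bindings - roughly what the upgrade guide has you add:

# turns {"can't be longer than %{count}", count: 1} into a plain string
def translate_error({msg, opts}) do
  Enum.reduce(opts, msg, fn {key, value}, acc ->
    String.replace(acc, "%{#{key}}", to_string(value))
  end)
end
def translate_error(msg), do: msg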

Major thanks to Chris - I have literally never had a question actually get answered on IRC. It was a last-resort measure, and it really says something about the Elixir community that someone helped me figure this out.

Stretchy is an ActiveRecord-esque query builder for Elasticsearch. It’s not stable yet (hence the <1.0 version number), and Elasticsearch has been moving so fast it’s hard to keep up. The major change in 2.0 was eliminating the separation between queries and filters, a major source of complexity for the poor gem.

For now my machine needs Elasticsearch 1.7 for regular app development. To update the gem for 2.1, I’d need to have both versions of Elasticsearch installed. While I could potentially do that by making a bunch of changes to the config files set up by Homebrew, I thought it would be better to just run the specs on a virtual machine and solve the “upgrade problem” once and for all.

Docker looked great because I wanted to avoid machine setup as much as possible. I’ve used Vagrant before, but it has its own configuration steps beyond just “here’s the Dockerfile.” I already have docker-machine (née boot2docker) installed and running for using the CodeClimate Platform™ beta, and I didn’t want to have multiple virtual machines running simultaneously, eating RAM and other resources. Here’s the setup:

  • Docker lets you run processes inside isolated “containers,” using a few different technologies similar to LXC
  • docker-machine (formerly boot2docker) manages booting virtual machine instances which will run your Docker containers. I’m using it to keep one virtualbox machine around to run whatever containers I need at the moment
  • docker-compose (formerly fig) lets you declare and link multiple containers in a docker-compose.yml file, so you don’t need to manually run all the Docker commands
  • The official quickstart guide for rails gives a good run-down of the tools and setup involved.

It’s a bit of tooling, but it really didn’t take long to get started; maybe an hour or two. Once I had it up and running for the project, I just modified the docker-compose.yml on my new branch. I had to do a bit of fiddling to get Compose to update the elasticsearch image from 1.7 to 2.1:

# modify the docker-compose.yml to update the image version, then:
docker-compose stop elastic
docker-compose pull elastic
docker-compose rm -f -v elastic
docker-compose run web rspec # boom! builds the machines and runs specs
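For reference, the relevant bit of the docker-compose.yml looks something like this - a sketch, with the web service details being assumptions:

# docker-compose.yml (the old v1 format)
web:
  build: .
  links:
    - elastic
elastic:
  image: elasticsearch:2.1   # was elasticsearch:1.7
  ports:
    - "9200:9200"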

Once there, the specs started exploding and I was in business. Let the updates begin! After that, it was just a matter of pestering our CI provider to update their available versions of Elasticsearch so the badge on the repo will look all nice and stuff.

I wanted to try out basscss. Ended up changing the fonts, color scheme, and fixing syntax highlighting all over the place. Now using highlightjs, which looks like the only well-supported syntax highlighter that can guess which language your snippet is in.

This blog’s been around for 5 years now. It’s mostly thanks to Jekyllrb – whatever I want to change about the site, I can do it without having to migrate from one database or format to another. If I need to code something myself, I can do that with Ruby or Javascript or plain ol’ shell scripts.

atevans.com has run on at least 3 different servers. It’s been on Github Pages, Heroku, Linode, and more. It will never be deleted because some blogging company with a great platform ran out of VC, or added some social dingus I didn’t want.

The git repo has been hosted four different places. When I got pwned earlier this year, the git repo was on the infected server. I just deleted it since my local copy had everything. Benefits of decentralized version control.

I had blogs all over the place before this, but this one has stuck around. I think even if Jekyll dies off somehow, a parser & renderer for a bunch of flat files should be easy in any language. I wonder what this will look like in another 5 years?


[Screenshot: atevans.com in basscss]

Web widgets are something everyone needs for their site. They’ve been done a hundred ways over the years, but essentially it’s some bundle of HTML, CSS, and JS. The problem is that there are so many ways of doing this badly. The worst is “semantic” classes and JS hooks. Semantic-ish markup is used in quick-start frameworks like Bootstrap and Materialize, and encouraged by some frontend devs as “best practices.”

Simaantec Makrpu

Semantic markup: semantic to whom? It’s not like your end-users are gonna read this stuff. Google doesn’t care, outside of the element types. And it’s certainly not “semantic” for your fellow devs. Have a look at the example CodePen linked from this post.

This CodePen represents the html, css, and js for two sections of our Instagram-clone web site. We have a posts-index page and a post-page: two separate pages on our site that both display a set of posts with an image and some controls, using semantic patterns. Some notes on how it works:

  • Does the post class name do anything? It’s semantic, and tells us this section is a post. But that was probably obvious because the html is in _post.html.erb or post.jsx or something, so it’s not saying anything we didn’t already know.
  • What does a post look like? What styles might it have applied? Can’t tell from reading the html. Is it laid out using float: left or text-align or inline-block or flexbox? I’d have to find the css and figure it out.
  • Once I do figure it out, I need to look even further to see what adding featured to a post changes. Is featured something that only applies to post, or can featured be applied to anything? If I make user featured, will that have conflicts?
  • Why are post and featured at the same level of CSS specificity? Their rules will override each other based on an arcane system no one but a dedicated CSS engineer will understand.
  • The .post{img{}} pattern is a land mine for anyone else. What if I want to add an icon indicating file type for that post? It’s going to get all the styles of the post image on my icon, and I won’t know about these style conflicts until it looks weird in my browser. I’ll have to “inspect element” in the browser or grep for it in the CSS, and figure out how to extract or override them. What if I want to add a “fav” button to each post? I have to fix / override .post{button{}}. Who left this giant tangled mess I have to clean up?
  • Does .post have any javascript attached to it? From reading the html, I have no idea. I have to go hunting for that class name in the JS. Ah, it does - the “hide” behavior on any <button> inside the post markup. Again, the new “fav” button has to work around this.
    • What happens on the featured post? For the big post on your individual post page, a “hide” button doesn’t even make sense, so it’s not there. Why is the JS listening for it?
    • If I add my “fav” button, where do I put that JS? We’ll want that on both the featured and the regular post elements.
    • What if we want “hide” to do different things in each place? For example, it should get more images from the main feed on the posts-index page, but more images from the “related images” feed in the related-images section. Do we use different selector scoping? Data attributes? Copy + paste the JS with minor alterations? The more places we use this component, the more convoluted the logic here will get.

Two steps forward, three back

Okay, we can apply semantic class names to everything: hide-button, post-image, featured-post-image, etc. Bootstrap does it, so this must be a good idea, right? Well, no. We haven’t really solved anything, just kicked all these questions down another level. We still have no idea where CSS rules and JS behaviors are attached, and how they’re scoped is going to be even more of a tangled maze.

What we have here is spaghetti code. You have extremely tight coupling between what’s in your template, css, and js, so reusing any one of those is impossible without the others. If you make a new page, you have to take all of that baggage with you.

Solutions

In the rest of our code, we try to avoid tight coupling. We try to make modules in our systems small, with a single responsibility, and reusable. In Ruby we tend to favor composable systems. Why do we treat CSS and JS differently? CSS by its nature isn’t very conducive to this, and JS object systems are currently all over the place (function objects? ES6 Class? Factories?). Still, if we’re trying to write moar gooderer code, we’ll have to do something different.

I’m not the first one to get annoyed by all this. Here’s Nicholas Gallagher from Twitter on how to handle these problems. Here’s Ethan Muller from Sparkbox on patterns he’s seen to get around them.

I’ve found a setup that I’m pretty happy with - there’s a markup sketch after the list.

  1. Use visual class names as described by Ethan above.
    • Decouples CSS rules from markup content: an image-card is an image-card is an image-card
    • Makes grouping and extracting CSS rules easy - any element using the same CSS rules can use the same class names
  2. Name classes with BEM notation, and don’t use inheritance.
    • No squabbles over selector specificity
    • Relationships between your classes are clear: image-card__caption is clearly a child of image-card just from reading the markup
    • Prevents class name clobbering: image-card highlighted could be clobbered or messed up when someone else wants a highlighted class, but image-card image-card--highlighted won’t be
  3. Use .js-* classes as hooks for javascript, not semantic-ish class names.
    • Decouples JS from html structure - your drop-down keyboard navigator widget can work whether it’s on a <ul>, an <ol>, or any arbitrary set of elements, as long as they have .js-dropdown-keyboarderer and .js-dropdown-keyboarderer-item
    • Decouples JS from styles. Your .js-fav-button can be tiny on one screen and huge on another without CSS conflicts or overrides
    • Clearly indicates that this element has behavior specified in your JS - as soon as you read the markup, you know you have to consider the behaviors as well
    • data-* attributes have all the same advantages, but they are longer to type and about 85% slower to find (at least, on my desktop Chrome)
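Putting all three together, the markup looks something like this (class names invented for illustration):

<!-- visual, BEM-style class names; the .js-* hook carries no styles -->
<div class="image-card image-card--highlighted">
  <img class="image-card__image" src="cat.jpg" alt="">
  <p class="image-card__caption">A cat</p>
  <button class="btn btn--small js-fav-button">Fav</button>
</div>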

This was brought on by using Fortitude on the job, which has most of these solutions baked-in. It had a bit of a learning curve, but within a month or two I noticed how many of the problems and questions listed above simply didn’t come up. After using Bootstrap 3 for the previous year and running into every. single. one. multiple. times. I was ready for something new. I quickly fell in love.

The minute anyone decided to go against the conventions, developing on that part of the site got 10x harder. Reusing the partials and components with “semantic” markup was impossible - I had to split things up myself to move forward. Some components were even tied to specific pages! Clear as day: “do not re-use this, just copy+paste everything to your new page.”

I’d much rather be shipping cool stuff than decoupling systems that should never have been tightly bound in the first place.

I made a bloggity post for Elastic, the company behind Elasticsearch, Logstash and Kibana.

Our list page got 35% faster. Then we took it further: Angular was making over a dozen web requests to get counts of candidates in various buckets - individuals who are skilled in iOS or Node development, individuals who want to work in Los Angeles, etc. We dropped that to a single request and then combined it with the results request. From 13+ HTTP round-trips per search, we got down to one.

Chef seemed like a big hack when I first started with it. Chef “cookbooks” have “recipes” and there’s something called “kitchens” and “data bags” and servers are called “nodes.” Recipes seem like the important things - they define what setup will happen on your servers.

Recipes are ruby scripts written at the Kernel level, not reasonably contained in classes or modules like one would expect. You can include other recipes that define “lightweight resource providers” - neatly acronymed to the unpronounceable LWRP. These define helper methods, also at the top level, and make them available to your recipe. What’s to keep one recipe’s methods from clobbering another’s? Nothing, as far as I can tell.

Recipes run with “attributes,” which can be specified in a number of different ways:

  • on the node itself: scoped for a specific box
  • inside a “role”: for every server of an arbitrary type
  • inside an “environment”: for overriding attributes on dev/stg/prd
  • from the recipe’s defaults

Last time I used Chef, attributes had to be defined in a JSON file - an unusual choice for Ruby, which usually goes with YAML. Now apparently there’s a Ruby DSL, which uses Hashies, which also appear to run at the Kernel level. I couldn’t get it to work in my setup. Chef munges these different levels together with something like inheritance - defaults get overridden in a seemingly sensible order. Unless you told them to override each other somewhere. Then whatever happens, happens.
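For illustration, here’s the same attribute set at three of those levels - the names are made up, but the DSL methods are real Chef:

# cookbooks/myapp/attributes/default.rb - cookbook default, lowest precedence
default['myapp']['ruby_version'] = '2.2.3'

# roles/web.rb - applied to every "web" node
name 'web'
default_attributes('myapp' => { 'ruby_version' => '2.3.0' })

# environments/production.rb - wins over the role on prd boxes
name 'production'
override_attributes('myapp' => { 'ruby_version' => '2.2.4' })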

“Data bags” are an arbitrary set of JSON objects. Or is the Ruby DSL supposed to work there, too? I dunno. Anyway, they store arbitrary data you can access from anywhere in any recipe, and who doesn’t love global state? They seem necessary for things like usernames and keys, so I can forgive some globalization.

This seems like a good enough structure / convention, until you start relying on external recipes. Chef has apparently adopted Berkshelf, a kind of Bundler for Chef. You can browse available cookbooks at “the supermarket” - are you tired of the metaphors yet?

The problem here is that recipe names are not unique or consistent! I was using an rbenv recipe. But then I cloned my Chef repo on a new machine, ran berks install, and ended up with a totally different cookbook! I mean, what the hell guys? You can’t just pull the rug out like that. It’s rude.

Sure, I could vendor said recipes and store them with my repo. Like an animal. But we don’t do that with Bundler, because it seems like the absolute bloody least a package manager can do. Even Bower can handle that much, and basically all it does is clone repos from Github.

These cookbooks often operate in totally different ways. Many cookbooks include a recipe you can run with all the setup included; i’s dotted and t’s crossed. They install something like Postgres 9.3 from a package manager or source with a configuration specified in the munged-together attributes for your box. Others rely on stuff in data bags, and you have to specify a node name in the data bag attributes or something awful. Some cookbooks barely have any recipes and you have to write your own recipe using their LWRPs, even if attributes would be totally sensible.

Coming back to Chef a few months after doing my last server setup, it seems like they are trying to make progress: using a consistent Ruby DSL rather than JSON, making a package manager official, etc. But in the process it’s become even more of a nightmarish hack. The best practices keep shifting, and the cookbook maintainers aren’t keeping up. You can’t use any tutorials or guides more than a few months old - they’ll recommend outdated practices that will leave you more confused about the “right” way to do things. Examples include installing Berkshelf as a gem when it now requires the ChefDK, using Librarian-Chef despite adoption of Berkshelf, storing everything in data bags instead of attributes, etc, etc, etc.

Honestly, I’m just not feeling Chef any more. Alternatives like Ansible, Puppet, and even Fucking Shell Scripts are not exactly inspiring. Docker is not for system configuration, even though it kinda looks like it is. It’s for isolating an app environment, and configuring a sub-system for that. Maybe otto is the way to go? But damn, their config syntax is weirder than anything else I’ve seen so far.

I’m feeling pretty lost, overall.

Everything you know about html_safe is wrong.

As pointed out in the World of Rails Security talk at RailsConf this year, even the name is kind of crap. Calling .html_safe on some string sounds kind of like it would make said string safe to put in your HTML. In fact, it does the opposite: it marks the string as already safe, telling Rails not to escape it.

Essentially, you need to ensure that every bit of user output is escaped. The defaults make things pretty safe: form inputs, links, etc. are all escaped by default. There are a few small holes, though.

Safe

  • link_to user_name, 'http://hired.com'
  • image_tag user_image, alt: user_image_title
  • HAML: .xs-block= user_text
  • ERB: <%= user_text %>

Not Safe

  • link_to user.name, user_entered_url
  • .flashbar= flash[:alert].html_safe # with, say, username included
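A sketch of how to patch those two holes - the protocol whitelist is one common approach, not the only one:

# Whitelist the protocol so "javascript:alert(1)" never becomes a link:
safe_url = user_entered_url if user_entered_url =~ %r{\Ahttps?://}
link_to user.name, (safe_url || '#')

# Escape the user-supplied part first, then mark the whole string safe:
flash[:alert] = "Welcome back, #{ERB::Util.html_escape(user.name)}!".html_safe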

Fight for the Future wrote an open letter to Salesforce/Heroku regarding their endorsement of the Cybersecurity Information Sharing Act (pdf link). The bill would, according to FFTF, leak personally identifying information to DHS, NSA, etc.

The first sentence of the letter bothered me, though:

I was disappointed to learn that Salesforce joined Apple, Microsoft, and other tech giants last week in endorsing the Cybersecurity Information Sharing Act of 2015 (CISA).

Apple is proud of their lack of knowledge about you. They encrypt a lot of things by default. They have a tendency to use random device identifiers instead of linking things to an online account, which is better security but causes annoying bugs and edge cases for users. Tim Cook has specifically touted privacy and encryption as advantages of using Apple devices and software. The FBI has given Apple flak for using good encryption, and there were rumors they would take Apple to court.

Has Apple reversed their stance? Are they lying to their customers? I haven’t seen them do that, ever. It would be really weird if they started now.

Oh, wait, they’re not:

Microsoft and Apple, two of the world’s largest software companies, did not directly endorse CISA. They - along with Adobe, Autodesk, IBM, Symantec, and others - signed the letter from BSA The Software Alliance generally encouraging the passage of data-sharing legislation. They also specifically praised four other bills, two of which focused on electronic communications privacy.

But who cares about the details, right? Get outraged! Get mad! Go to the window, open it, stick your head out and yell: “I’m as mad as hell, and I’m not going to take this any more!”

The second sentence of the letter is also problematic:

This legislation would grant blanket immunity for American companies to participate in government mass surveillance programs like PRISM…

This implies a conflation I’ve seen around the internet a lot: that Apple willingly and knowingly participated in an NSA data-harvesting program codenamed PRISM because Apple’s name appeared on one of the Snowden-leaked slides about the program. Also appearing: Google, Microsoft, Facebook, etc.

Apple responded that they did not participate knowingly or willingly. Google said the same thing. Microsoft spouted some weasel words; damage control as opposed to “what the fuck?!”

The NSA may have been using the OpenSSL “Heartbleed” bug for some or all of the data collection from these companies. Apple issued a patch for that bug with timing that subtly suggests it was in response to PRISM - pure speculation, but plausible.

Point is, if the three-letter agencies were using exploits like heartbleed, they wouldn’t tell Apple or Google. To all appearances, Apple and Google didn’t know anything about PRISM. The FFTF letter is making a weird insinuation that Apple, Google, and other companies would knowingly participate in such a scheme if the bill were passed.

I’m sick and tired of web sites, Twitter, news, etc telling me to be outraged. Virtually all of them reduce big, complex issues to sound bites so we can get mad about them. I flat-out refuse to have any reaction (positive or negative) to anything “outrageous” I find on the internet, until I’ve done my own homework.

It started with TextMate when I first discovered Ruby on Rails in 2006 or so. TextMate went for ages without an update, Sublime Text was getting popular, and appeared to have mostly-complete compatibility with TextMate, so I switched.

Now Sublime has finally annoyed me. The Ruby and Haml packages just try too hard to be helpful, throwing brackets and indents around like there’s no tomorrow, often in places I don’t even want them. Time to try out Atom, especially since Github had a rather amusing video about it.

It takes quite a few packages to get up to the level I had Sublime at, but I think I’m basically there. Here’s my setup:

  • Sync Settings - back up your Atom settings to Gist. Here’s mine. Like dotfiles, these are meant to be shared. In Sublime this was a PITA involving symlinking things to Dropbox.
  • Sublime Word Navigation - nothing is more frustrating than having to hit alt+← twice just to get past a stupid dash.
  • Editorconfig - keep your coding style consistent.
  • Local Settings - I’ve wanted this in Sublime for ages. Simple things like max line length, soft wrap settings, and even package settings like “should RubyTest use rspec or zeus” on a per-project basis.
  • RubyTest - speaking of… Does everything I need from Sublime’s RubyTest, just had to re-map the keyboard shortcuts.
  • Pigments - shows css colors in the editor, an alternative to Sublime’s GutterColor.
  • Aligner - works way better than Sublime’s AlignTab package.
  • Git History - step through the history of any file.
  • Git Blame - shows the last committer for each line in the gutter. Unfortunately, the gutter is too small for many names, so it craps out and shows “min”. Also, the gutter can’t keep up with the main window’s scrolling, which is janky.
  • Git Plus - I still end up doing Git on the command line. This package often didn’t support the stuff I need to do on a daily basis.
  • Language-haml - if you’re unfortunate enough to have to deal with HAML, this kinda helps. Like putting a band-aid on a bullet wound.
  • Rails Transporter - this is a nice idea, but it still doesn’t cover the functionality that Sublime’s RubyTest had. cmd+. would let you jump from a file to the spec file and back, and transporter just gives up if you’re in a namespace, form object, worker, etc.

How’s it working out? Well, Atom still feels a bit unpolished overall. Some of the packages above don’t work quite right, or aren’t as helpful as they advertise. And Atom’s auto-completion is annoying as bloody hell. It seems to use ctags or some variant, so it pulls in all symbols from everywhere, and the one I want is never even close to the top. And it pops up on every. single. thing. I. type. in a big flashy multi-colored box that randomly switches whether it’s above or below the cursor.

Finally, the quick-tab-switch is terrible compared to Sublime’s. Its fuzzy matching is way worse, it ignores punctuation like underscores, and it maintains no concept of how “nearby” a file is, nor how recently I’ve opened it.

I might switch back.
