Today is the last day of the retune conference I’m attending. After resonate in Belgrade earlier this year, this is only my second of these arts/technology conferences. As someone who has attended quite a few tech conferences, I couldn’t help but feel a little alienated by both events, which made me think a little more about why that is. In the following few paragraphs, I’ll try to sum up my findings.
Before that, let me say that I don’t want this to be understood as a critique of sorts. Different events are different and people seem to genuinely enjoy both types, so please take these as observations. While I do react negatively to some of them (as will be clear in the text), I don’t see my own experience as in any way representative of visitors to these events. With that out of the way, here we go:
1. Presentation quality
With a few notable exceptions, the presentation quality at both resonate and retune was quite low compared to most tech conferences. This manifests itself in many ways. For example, quite a few speakers struggled a lot with the English language, to the point of me not being able to follow a speaker because of a heavy accent and broken flow. I’ve seen tech speakers struggle with English, but not on this scale. And not only that: some speakers didn’t even bother to prepare slides, erratically clicking around between browser windows or even just files on their desktop. Maybe they thought their work stands for itself and they couldn’t be bothered to prepare better, but as I’m used to (mostly) precisely prepared, crafted and rehearsed talks from tech people, this annoys me. It feels like negligence.
Apart from that, many presentations violated simple presentation 101 rules, like using strong contrasts and big fonts (the latter was especially important because the venue was quite long, so many people looked at the screen from quite a distance).
Also, many of the presentations at retune tended more towards “tell, don’t show” than the other way round, with too few examples and too much theoretical talking: about motivations, which is fine, but also about processes, which would have been much better shown through examples.
There’s also a weird tangent to this: it feels to me as if the arts are much more into bringing their work onto an academic level of sorts, something that only very seldom happens with tech talks.
2. Organisation and tech
I’ll just explain this with a symptomatic weirdness from retune: speakers were forced to use hand mics, while after the talk, one of the organizers took a body mic for a walk through the audience to collect questions. You might laugh about this, but it is symptomatic of the way the whole technology part was treated. (Maybe I’m wrong and there was a good reason for that weird setup. I’ll try to find out.) It is a bit strange for a scene that should have a lot of know-how on audio technology. At resonate, the acoustics in some of the rooms were so bad that you could only understand the speakers if you were sitting really close to them. Sometimes it took more than 10 minutes to set up a speaker’s laptop because nobody was there to help.
3. Catering and Venue
Neither resonate nor retune had free catering. That’s not a problem as such and is also reflected in the ticket prices (also, I suspect that a conference like retune has a much harder time finding sponsors at the moment than most tech events). But it was a bit weird to then have to pay 2,50 EUR for a bottle of water. The catering was priced okay, I guess; I didn’t even try it. All in all, I liked the resonate venues, more or less, while the retune venue was way too dark for my liking, and every speaker joked about not being able to see the audience. This artificial and probably unintentional divide was countered by the very nice discussion format, which allowed for longer interactions with the speakers afterwards.
Did I enjoy retune or did I stay true to my grumpy self?
This all might read as if I didn’t enjoy myself, and that would be wrong. I’ve seen some great talks and performances, have been inspired (arguably, this effect was bigger at resonate, which could also be partially attributed to exploring a new city, Belgrade, at the same time) and got to know a few very lovely people. So there. I know where I feel at home, but of course it’s important to leave your cozy home at times to get stimulated. Both retune and resonate are able to do that, if you’re interested in the intersection of arts and technology.
EDIT: I’ve corrected a few typos and also added a few paragraphs to better explain some things.
I see what he’s getting at: Slides (and my talks usually fall into this category) are usually not worth a lot on their own. Nevertheless, I usually publish them, because they are at least very useful to the people I’ve been talking to: They contain a lot of links, image credits, and maybe they even help you remember stuff I was saying during the talk.
That being said, a video of a talk, while still incomplete, is a much better way to document it. Nowadays I often try to also write a blog post on the talk, which is a much better format if you need text. (Which, in essence, is what he wants you to do instead.)
In the end, I think publishing slides does serve a purpose, even if that purpose usually is not to work as a substitute for people who didn’t see the talk. Am I missing something here?
Whoah. What’s happening here? And then it hits you: there’s a dollar sign in the original string. Suddenly, the export command makes sense, as shells usually interpolate variables. But $e2jd is (hopefully) not a defined variable, so it gets cut out.
But what about Dotenv?
Turns out, a relatively large part of the relatively small Dotenv codebase does all kinds of substitutions to properly mimic shell behaviour. And that makes sense, of course.
The fix is easy: add a backslash and escape the $ sign. Backslashes are always the solution.
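To make the failure mode concrete, here’s a minimal Ruby sketch of the shell-style interpolation. It doesn’t use the Dotenv gem itself, and the key value is made up for illustration:

```ruby
# A value containing a dollar sign, like the broken API key
# (this key is made up for illustration):
value = 'key-abc$e2jd'

# Shells (and Dotenv, mimicking them) expand $e2jd as a variable.
# Since no such variable is defined, it expands to nothing and the
# tail of the key is silently cut off:
expanded = value.gsub(/\$(\w+)/) { ENV[$1].to_s }
# expanded is now "key-abc"

# Escaping the dollar sign with a backslash keeps it literal,
# which is exactly what you write in the .env file:
escaped = 'key-abc\$e2jd'
```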
UPDATE: As Mailgun people mentioned on various channels (see comments), they changed their API key format a while back to fix this; I just had an old API key in there. If you’re having this problem, regenerating the API key should fix it. Thanks to Mailgun for the quick response!
For some reason, after updating an App to Rails 4.1.5, Heroku stopped serving static assets.
It didn’t really cross my mind to look out for the X-Sendfile header in the empty response I got, so we spent a whole lot of time trying out other things, cursing at Heroku. After almost giving up, suddenly the X in that curl response stood out.
Looking at the environment configuration, I found x_sendfile_header, and sure enough, it was set. As this project had been hosted on Heroku for as long as any team member could remember, I tried to find the change that introduced the x_sendfile_header config, but gave up after a while.
The only reason it didn’t bite us before is that Rails only started to use X-Sendfile for assets after the linked commit.
It’s not that I resent that change, even if it doesn’t REALLY make sense: if you have a webserver that understands X-Sendfile, you would usually serve assets directly with that webserver anyway. But that’s none of my business.
I have no idea why the sendfile config was in there. But this goes to show that even small changes, none of which were made in error, can ruin two developers’ day.
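For reference, the offending setting lives in the environment config. On Heroku, where no fronting web server handles X-Sendfile, the fix is to remove it or set it to nil (a sketch, with the comment reflecting my understanding of the behaviour):

```ruby
# config/environments/production.rb
#
# This setting tells Rails to emit an X-Sendfile header instead of
# the file body, expecting a web server like Apache or nginx to pick
# it up and serve the file. Heroku has no such server, so the
# response arrives empty. Disable it:
config.action_dispatch.x_sendfile_header = nil
```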
When I’m out of options on what to cook for lunch (which usually is on the quick side of things anyway), I usually have some pasta left somewhere in my drawers. Pasta sauces can be a lengthy affair (if you have never done that, you should try a real homemade bolognese sugo, which ideally cooks for a pretty long time), but this one is really quick, and I usually make it freestyle. I tried to remember what I did this time, because it was extremely tasty:
Put a generous amount of olive oil into a small pan on medium heat. The ingredients are supposed to swim in oil. Remember: Olive oil = good fat.
Add finely chopped garlic to taste (a good rule of thumb is one clove per person). I messed this up and only cut it into slices, which works, but small pieces are better.
Add dried chilies, finely chopped or broken, also to taste. As this largely depends on your taste AND the potency of your chilies, I can’t seriously give you any recommendations here.
Let it simmer for some time, so that the garlic is cooked but not brown.
Add a teaspoon or two of tomato paste. It won’t easily dissolve, but that will be fixed later.
Add some dried herbs. I used home-grown oregano; I assume basil would work as well.
When the pasta is done, add 2-3 tablespoons of the pasta water before straining the pasta; this will dissolve the paste and bind the whole thing together. Add a bit of salt.
If possible, pour the strained pasta directly into the pan and give it a good stir so that the sauce is well distributed.
Serve, and add hard cheese to taste if you must.
Have some yoghurt ready for any dosage errors on the chili side.
A trip to IKEA ended with me getting a new version of their DIODER color LED contraption. It’s basically a strip (or, in the new version, a puck-like structure) with a load of RGB LEDs, driven by a small and simple PIC-based controller unit that lets you either set a color using a dial or choose between two different ways of cycling through ALL THE COLORS!!!
It’s a fun toy, but since the circuit is so simple (it’s basically a power supply, the microcontroller plus controls, and the LED drivers), it’s also very easy to hack. The end game, of course, is to address each light source (both DIODER sets contain four of them) individually, but for that you would need a dedicated driver chip. What’s easy, though, is to remove the PIC microcontroller and the pot, attach an Arduino to the three channels and the PIC’s power supply, and bam! You can drive the DIODER set with the PWM channels of the Arduino.
The next step was to add the Ethernet shield and to drive the LEDs via UDP. My version of the Ethernet shield unfortunately doesn’t play nice with the DIODER power supply, so currently I need to have the USB connected, but hey.
I devised the simplest UDP protocol I could come up with and wrote a little Ruby script to send some packets.
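Something along these lines would do the job. This is a sketch, not the original script: the three-byte layout, host and port are my assumptions.

```ruby
require "socket"

# Sketch of a minimal sender: one UDP packet per color update,
# three raw bytes for the red, green and blue PWM values (0-255).
# ARDUINO_HOST and ARDUINO_PORT are made-up values.
ARDUINO_HOST = "192.168.1.50"
ARDUINO_PORT = 8888

def send_color(red, green, blue)
  packet = [red, green, blue].pack("C3") # three unsigned bytes
  socket = UDPSocket.new
  socket.send(packet, 0, ARDUINO_HOST, ARDUINO_PORT)
  socket.close
end

# send_color(255, 0, 128) # full red, no green, half blue
```

On the Arduino side, a sketch reading three bytes per packet and feeding them to the three PWM pins via analogWrite would complete the picture.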
Next, I used the rosc gem to let the Ruby script open up an OSC server. A little test with TouchOSC and a small three-slider control surface was already pretty cool, but as I saw that latency was pretty much nonexistent with this setup, I wanted to go further.
So I built a little drum pattern in Ableton Live, added an additional MIDI track and painted some rhythmic controller envelopes on CTRLCHG 1, 2 and 3. In a first iteration, I used OSCulator, a pretty cool tool that can basically convert any control signal into any other control signal, to send OSC to my script. In the second iteration, I’ve at least eliminated OSCulator by finally learning me some Max 4 Live.
The effect of RGB strips and pucks flashing synchronously to the music is pretty mesmerizing, I have to say.
I’ve not been able to research this in depth so far, but it seems as if mobile Safari on iOS 7, under unclear circumstances, caches responses even if they are non-OK responses.
This is clearly wrong and broke our nginx auth_request-based login system: it renders a 403 error with a (Mozilla Persona) login button after authentication fails. After login, the user is redirected to the original address, and this should trigger a full reload (which would then pass authentication and render the real web page). In our case, it just loaded the login page from cache, which, due to Persona’s behaviour, would trigger the login dance, which would redirect to the original address, which… you get the idea.
Fixable by setting no-cache headers when rendering the 403 page, but still pretty annoying.
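A minimal Rack-style sketch of the workaround (the markup and the exact header set are illustrative, not our actual app):

```ruby
# Always send explicit no-cache headers with the 403 login page so
# mobile Safari cannot serve it from its cache after a later,
# successful login.
forbidden_app = lambda do |env|
  headers = {
    "Content-Type"  => "text/html",
    "Cache-Control" => "no-store, no-cache, must-revalidate",
    "Pragma"        => "no-cache",
    "Expires"       => "0",
  }
  [403, headers, ["<!-- login page with the Persona button -->"]]
end

status, headers, _body = forbidden_app.call({})
# status is 403, and the Cache-Control header forbids caching
```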
In preparation for my talk at Railswaycon 2012, I hacked together something I had been thinking about for a long time. What if you could let your server render out complete pages (for the sake of searchability, for example) and then initialize a Backbone collection from that HTML? Your Backbone app could then augment the statically rendered page with its own magic.
Here’s a gist. I’m going to publish it including examples and tests later on.
I say: F*CK YOU, APPLE!!! The Web Audio API has been available as a WebKit patch for what feels like at least 3 years and has been (correct me if I’m wrong) available in stable Chrome for at least a year, or even one and a half (it was present in beta versions for at least two). Firefox and Google (and even Opera) are innovating the hell out of the web platform, and all that Apple does is fix the most blatant security issues.
So, Google’s “Install a modern browser” may be a bit rude, or maybe even cheeky, but it’s not completely unfounded. This all wouldn’t be much of a problem as long as we were only talking about desktop browsers, because hardly anybody uses Safari there anyway (its market share is really small), but then there’s the whole mobile space, with the iPad clearly dominating it and Apple not allowing other browser engines onto the platform.
This situation actively hurts the web. As much as I love my iPad, and as much as Apple did in the first place to make mobile browsing bearable and popular, this needs to change.