submarine - open source service that takes an email and returns a key

Cloud-based services require that you give up all of the private details of your life in order to gain access to your favorite software. It’s basically an accepted fact that once you start using an online service you may as well post the data you provided to them on your front door, because it’s out there now and it’s never coming back.

For the most part, I think your average person is fine with quietly ignoring this. After all, the benefit of using Snapchat far outweighs the cost of Snapchat’s employees, the NSA, and any potential hackers knowing who you sext on a regular basis.

There are strong-minded people, however, who are wary of the SaaS model because they don’t want to give up their precious data. YNAB, one of my favorite companies, is finally modernizing its software by taking it into the proverbial cloud. I was stunned to see how many of YNAB’s current users were disgusted by this decision, begging that desktop-only versions continue to be supported. I suspect these sentiments will become more and more common as high-profile data breaches pile up.

This, along with some conversations with friends about bettrnet’s data, prompted me to rethink the SaaS data-storage model.

Let’s use YNAB’s new cloud service as an example. It’s relatively simple budgeting software that requires a few key inputs (e.g., your income, your budget allocations, your expenditures). YNAB The Software needs your data to do its job, but YNAB The Company does not need your data to do its job. If it were up to The Company, they might wish with all their hearts that they did not have to take, store, and protect your data just so you could use The Software that they build and you love. It’s a hassle to them and, more than that, a liability.

(Note: I am well aware that most companies see your data as an asset they can learn from or sell. My hypothesis is that, going forward, many companies will come to see the risk of holding your data, and the cost customers incur by being forced to give it up, as much an expense as an asset.)

The Company really would like to make the users of The Software feel in control of their own data. Sure, it wants to use the data, but it also wants as many customers as possible, including the ones who would prefer a desktop application. In an ideal world, users might provide YNAB with access to a personal database server; The Software would connect to each individual user’s database server to store that user’s data. If the user ever decides they no longer trust The Company, all they have to do is revoke access to their personal database.

This idea isn’t entirely practical, however. So, instead, we are working on a web service to give that same power to users without requiring SaaS companies to completely alter the way they store your data.

fight for the user

submarine is an open-source project that does one simple job: take a user, return a random key. When you sign up for YNAB The Software, YNAB The Software makes a request to submarine for a unique key on your behalf. submarine creates a unique key and stores it in its own database, completely separate from your YNAB data. submarine also notifies you via email about this key, and gives you access to manage it.

YNAB The Software uses the key that it has been given to encrypt your data before storing it in its database. Whenever you log back in to YNAB The Software to look at your budget, YNAB The Software asks submarine for your key. It uses the key to decrypt your data and do its YNAB magic, thus making you a happy customer.

Six months later, let’s say you no longer trust YNAB The Company with your data. You go to submarine and remove the key that was created for your YNAB data—that data is now as good as deleted.

Any prying employees at YNAB The Company, analysts at the NSA, or anonymous folks at Anonymous who sneak a peek at YNAB The Software’s data will forever see something like this: �/���L��в" ���sZ�\r��:t��Ob*���Y@. YNAB locked up your data and you threw away the key; it will never be unlocked again.
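To make that flow a little more concrete, here is a rough sketch of what YNAB The Software’s side could look like. To be clear, this is not a real submarine client API; the function names, the submarine endpoint mentioned in the comments, and the use of Apple’s CryptoKit are all stand-ins to illustrate the idea.

```swift
import Foundation
import CryptoKit // used only for this sketch; a real service would use its own crypto stack

// 1. On signup, YNAB The Software asks submarine for a key on your behalf.
//    (Imagine a POST to a hypothetical https://submarine.example.com/keys
//    that creates the key, emails you about it, and returns the key bytes.)
func requestKeyFromSubmarine(forUser email: String) -> SymmetricKey {
    // Stand-in for the real network call.
    return SymmetricKey(size: .bits256)
}

// 2. YNAB The Software encrypts your budget with that key before storing it.
func encryptBudgetForStorage(_ budget: Data, with key: SymmetricKey) throws -> Data {
    let sealed = try AES.GCM.seal(budget, using: key)
    return sealed.combined! // this ciphertext is all YNAB's database ever holds
}

// 3. When you log back in, it fetches the key from submarine again and decrypts.
func decryptStoredBudget(_ ciphertext: Data, with key: SymmetricKey) throws -> Data {
    let box = try AES.GCM.SealedBox(combined: ciphertext)
    return try AES.GCM.open(box, using: key)
}

// 4. If you delete your key in submarine, step 3 has nothing to decrypt with,
//    and the ciphertext sitting in YNAB's database is as good as deleted.
```

The point is that the plaintext never needs to live in YNAB’s database at all; the key lives with submarine and, ultimately, with you.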

Now, there are some obvious hurdles to this plan. For one, YNAB The Company has to be extremely committed to making submarine work in its infrastructure. We certainly don’t envision submarine being used to encrypt all of a user’s data, but perhaps a company could use it as an extra layer of security for particularly sensitive pieces of its data, or to assure its users that certain data is stored in an anonymized way.

It’s my belief that companies selling software services should (and will) give more power over, and responsibility for, data back to the users to whom it belongs. submarine’s goal is to help service builders make that decision. Check out submarine on GitHub to contribute, and try out one of the sample applications that uses submarine to protect its users’ data.


Generating an iOS Library from Google Cloud Endpoints for a Swift App

At bettrnet, we just started working on a brand new iOS app to be the main conduit to our great service. As part of this, we’ve had to hack our way through some project bootstrapping issues that were less than seamless. One of these was getting our Swift app to communicate with our Google App Engine API.

If you are using similar technologies, bless you for trying this. I wish you luck in your endeavor. For us it was a huge PITA, but we finally worked through it and have been able to move on to more interesting aspects of bettrnet’s fancy new app.

As a disclaimer, this may not be as hard as I make it out to be here. (1) We’re in a weird place in our bettrnet development because we recently went through an identity crisis and then reignited our fiery passion to make a bettr net. (2) Swift 2.0 just came out, but Objective-C is still very much a thing. We plan to write our bettrnet app entirely in Swift, but that doesn’t mean we get to leave Objective-C in the dark basement where it belongs (yet). Anyway, because of this transition there are some missing pieces in the documentation and tooling provided by Google to generate the libraries you need. (3) Those of us who were working on this task are not iOS experts, at all. bettrnet’s one true iOS developer was working on something a little more exciting than bootstrapping a new project.

Unfortunately, I don’t completely understand every step we took to get this working, so I can’t write a comprehensive tutorial of the Step 1 do this, Step 2 do that, Step 3 perform blood sacrifice to Objective-C gods variety. It was a lot of Googling and trying things to see what stuck. I mostly just want to document what resources we looked at while trying stuff, and if you look at the same resources and try stuff then it should eventually work for you as well.

(If you’ve actually written an iOS app before this might all be dead simple to you. Good for you. Go collect your 108K and mind your business.)

Step 1 do this: Generate the discovery document for your API

You need to create the JSON document that describes your API (what endpoints exist, what parameters are accepted, etc.). This is used to generate the Objective-C client libraries. There are actually several ways to do this, and I think it’s fairly well documented by Google (also this). The simplest way is to cd your/api_project and run mvn appengine:endpoints_get_client_lib. This assumes your Google App Engine project is Java based and you are using mvn. If you aren’t… well, consider switching, or figure out how to generate the document(s) using one of Google’s other methods.

Once the documents are generated, they will be placed in a folder like your/api_project/target/generated-sources/appengine-endpoints/WEB-INF. There should be one .discovery file (or several, depending on how your API is structured) here. Take note of this location; it’s going to be useful later.

Step 2 do that: Build the Xcode project that builds the Objective-C libraries that interface with your API

(Ugh. I hate having to compile a utility’s source code to use it. Just let me download the executable.)

Follow Google’s steps here to open the ServiceGenerator project and build it. This program will create Objective-C files based on your .discovery files. Google makes it sound like building this project is going to be easy as pie. If it works for you right off the bat, then celebrate and move on with your life. If you have the same experience I did, then you may see some confusing error messages.

In hindsight, the error messages are not that confusing… but, they definitely aren’t that clear.

The problem I got basically said something like, “can’t find such-and-such file in /Users/me/place/gtm_session_fetcher/Source/such-and-such-file.m.” This blew my mind. Of course you can’t find that damn file; it doesn’t exist. Why would it exist? It’s outside of this project.

Whatever; the solution ended up being to just create that folder and put the files it wants in it. All of the files exist in the google-api-objectivec-client-read-only svn repo that you cloned; it’s just a matter of copying the required ones into the folder where Xcode expects to find them. Once that is done, it should build pretty easily.

Then you can go back to Google’s documentation (linked above) and use the ServiceGenerator to create Objective-C files for your discovery documents. Yes!!!

Step 3 perform blood sacrifice to Objective-C gods: Add required Google client library

As documented here, you have to add some required files to your project before you can add your client library. Start by following the steps Google gives you; it should get you almost all the way there.

We ran into an issue with the Networking files not compiling correctly (i.e., 'GTMSessionFetcher.h' file not found). We ended up having to actually alter a few of Google’s files; I think this has something to do with Google transitioning from old iOS versions to new ones. Based on the conversation in that thread, we commented out, removed, or otherwise mangled the GTMSession… files Google provides until they compiled. It wasn’t straightforward, and I honestly don’t remember everything we did. If you are stuck on this and need help, let me know and I’ll send you our files. I think once you make the change, GTLNetworking_Sources.m will actually need to be compiled with a compile flag of -fobjc-arc, instead of -fno-objc-arc like the documentation says.

Eventually, you should end up having added GTLNetworking_Sources.m and GTLCommon_Sources.m to your project. You don’t need to worry as much about actually adding all the other files to your project, because these two files pull those ones in. Add them to the list of Compile Sources in your project’s Build Phases (see the “compile sources” screenshot).

The project should build at this point (Command+B).

Step 4 you’re almost there: Add your Objective-C files

Now that you have the required files added, it’s time to add your Objective-C files. You need to add them to your Swift Xcode project and do some black magic to make sure (1) they compile with the project and (2) you can use them in your Swift code.

Drag all the files into your project. Xcode should prompt you about how you want to handle this (see the “adding files” screenshot). What worked for me was to choose Copy items if needed, Create groups, and uncheck all targets. If you have a target selected, it’s going to add all of these files to the Build Phases Compile Sources list to be compiled, which is not what you want.

After they get added to the project, you need to add at least two files to the list of Compile Sources. For each service you are adding, make sure the GTLServiceName.h and GTLServiceName_Sources.m files are set to be compiled (again, see the “adding files” screenshot). These files pull in all your other service files. If you add both these and your other files to the compile sources, you’ll get a linker error or something and be very annoyed.

Your project should build now.

If it builds, rejoice; your service still cannot be used in your Swift project, though. For that, you need to add (or add to) your Objective-C bridging header file. This is a special Objective-C header that makes Objective-C libraries visible to your Swift files.

You just need to add import statements for your services to this: #import "GTLServiceName.h". That’s it! Now crack open a Swift file in the project and try typing let serviceServiceName = GTLServiceName(). That should build just fine.
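Pulling those two snippets together, the whole thing ends up looking something like this (GTLServiceName is still just a stand-in for whatever your generated service class is actually called, and the header file name is only an example):

```swift
// In your bridging header, e.g. MyApp-Bridging-Header.h (Objective-C side):
//   #import "GTLServiceName.h"

// Then, in any Swift file in the project, the generated classes are visible
// without an import statement:
let service = GTLServiceName()
```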

Now you can really rejoice, for you have successfully generated your iOS client library and added it to your Swift project.

Step 5: Use your library

You may not be entirely clear on how to use this service just yet, though, especially considering we haven’t set up OAuth yet, which I’m assuming you’re going to want.

Rather than explain any more about something I obviously don’t understand that well, I’m going to point you to a very useful StackOverflow discussion. The answer there shows an example of using Swift to make service calls, complete with OAuth.

In Ryan’s example, he is using the Google Drive service with OAuth. The thing is, your GTLServiceName object is almost the exact same thing as the GTLServiceDrive object Ryan is using. So his example will map very closely to using your own service.
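To give you a feel for the mapping, here is a rough Swift 2-style sketch against your own service. GTLServiceName, GTLQueryName, and queryForItemsList() are placeholders for whatever the ServiceGenerator produced for your API (those exact names are mine, not Google’s), and the commented-out authorizer line is where the GTMOAuth2 object from Ryan’s sign-in flow would go. Treat it as a sketch, not gospel.

```swift
// Rough sketch only; substitute your own generated class and query names.
let service = GTLServiceName()

// Attach the auth object from the GTMOAuth2 sign-in flow so requests are authorized.
// service.authorizer = auth

// Build a query with one of the class methods on your generated query class.
let query = GTLQueryName.queryForItemsList() // hypothetical generated method

// executeQuery(_:completionHandler:) comes from the GTLService base class.
service.executeQuery(query) { ticket, response, error in
    if let error = error {
        print("API call failed: \(error)")
        return
    }
    // response is the generated GTL model object for this endpoint.
    print("Got response: \(response)")
}
```

Only the class names change; the execute-a-query pattern is identical to the Drive example.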

Google has good examples here as well on how to use the service.

Conclusion

So, obviously we didn’t understand or document this process very well, but we hacked our way through it. If someone else finds this useful, then that is fantastic! If anyone else has some additional tips on how to make this a little less painful, we’d love to hear about that.


My experience with the HoloLens

I had the opportunity to go to one of Microsoft’s HoloLens demos today and I’m really excited that I got to try out this awesome piece of technology. You should know that my personal hype level for HoloLens was very high, so I walked into the demo fully intending to thoroughly love the experience.

The Demo

They ushered me into a mock living room where they told me I would be playing Project X-Ray, the “first untethered, mixed-reality first-person shooter.” This game was demoed at Build, although I played a simplified version of it that did not involve any handheld controller. It took me a second to figure out how to put the device on, but after you do it once the process should be extremely easy. The device itself was very lightweight and felt sturdily attached to my head, which was good because I would be dodging fireballs in a moment.

The game served its purpose of showing off the technology, but was actually surprisingly fun to play as well. You start by spinning in a circle and scanning the room, which was pretty cool because you can see the HoloLens detecting the walls and laying a wireframe over them. After starting the game, a friendly robot-alien crash lands through the wall and tells you some hocus-pocus about how some bad aliens are coming and he has to combine with you so that you can kill them, blah blah blah, who cares, let’s shoot some aliens. He flies into you, and now you have a reticle on your screen that implies you can shoot stuff.

Then the aliens come. They might break through any of the walls, so using a combination of your hearing skills and your following-on-screen-arrows-that-point-exactly-where-to-look skills, you have to find the cracks in the walls. A big tube eventually pops through and aliens fly/crawl out of it. Aiming at the aliens was extremely intuitive because you just have to look at them. The reticle follows your gaze and you fire the trigger when it’s on them. I worried about this being sort of weird for some reason, but while I was doing it I didn’t even have to think about it, so that must be a good sign. Shooting at the aliens involved the “air tap” that you’ve seen in some of the other videos. People must have been having a hard time with this, because they made me practice it like 10 times. It worked fine for me though (practice makes perfect), except that once or twice I had to remember to put my hand out far enough in front of me that the cameras could pick it up.

The aliens shoot stuff at you as well, all of which you have to avoid. This makes the game pretty exciting because you are ducking and stepping all over the room. I imagine playing this in a larger room, where you might get to hide behind stuff, would be pretty dang cool. Different types of aliens with different attacks keep coming out of different walls at you, and eventually you get a powerup called “X-ray.” Say “X-ray” and time slows down and you can see where the aliens are behind the walls. At least, that’s what is supposed to happen, but I’m not sure I ever actually got it to work. Maybe I wasn’t polite enough or confident enough when asking HoloLens to turn X-ray on, or maybe the UI change just wasn’t strong enough for me to notice.

Eventually a boss alien comes out and you kick its butt and you see your score on one of the walls and that’s it. The HoloLens team member made me feel all special by telling me I had a “good score”, probably “third best all day”. This feeling, of course, was ruined completely when I found out some other jerk in my group got 5000 more points than me. Jerk.

They then let us sit down with some of the “Developer Experience” developers to ask questions. I had a lot of questions in my head that I of course forgot all about when it came to Q&A time. I was there at the end of the day, and it did honestly feel like at least a couple members of the three-person team weren’t that excited to answer our questions.

Reaction

In a lot of ways, it lived up to my high expectations and hopes. The device was extremely comfortable and lightweight. In fact, as soon as I got into the game I almost completely forgot I was wearing it. There was also no trouble figuring out where to focus, like you might experience with the Oculus Rift.

The holograms interacted very well with reality, not jumping around inappropriately or coming out of thin air instead of the wall. I got really sucked into the action and it really feels like stuff is crashing through the walls at you (with one caveat, see below).

I didn’t get to try out a ton of gestures, but the “air tap” worked like a charm. Gestures are one aspect that has me worried about the HoloLens, because controlling interfaces with gestures is sometimes intuitive on paper but extremely difficult in practice (I’m looking at you, LeapMotion). The LeapMotion has incredible tracking of your hand movements, but without tactile feedback I would say it is still very difficult to use for most things. Someone in my group asked about touching holograms to select them instead of relying on the “gaze” feature, to which one of the developers responded more or less by implying, “How would you know that you touched it?” I think this is the right answer. The technology probably supports touching, but without tactile feedback to let me know I’ve touched something I would rather select it through other means, so this is reassuring to me. As part of this discussion, though, he mentioned that if you were working in a hybrid app that utilized a traditional monitor as well as the HoloLens, it’s possible to use your mouse. You just move your mouse to the edge of the screen and it pops out and keeps going. This idea is extremely attractive to me.

The game was very fun (for a demo) and easy to play. One aspect of video games the HoloLens actually might struggle with, however, is exploring new places. In a traditional first-person shooter I get to experience new worlds as I progress through the game. If all games are similar to Project X-Ray, then at the end of the day you are still staring at the same four walls, even if there are some holograms poking out of them.

The HoloLens runs Windows 10, which means of course that it can run any Universal Windows App, but should theoretically also mean that it can run anything a normal Windows PC can. So, if I want to completely escape from my monitor and spread my Visual Studio, Chrome, and Spotify windows out across my wall as I’m developing then I think I should have the freedom to do that. They told me that’s not how it works though :sad_face:. I kind of got the impression that they couldn’t understand why you would want to run a non-Universal Windows App on the HoloLens. This left me feeling a little “meh”, as the HoloLens really will be only as useful as the apps it supports, and could not completely replace your PC.

The one (and really only) glaringly huge flaw in the HoloLens is the field-of-vision. It. Is. Small. Surprisingly small. The videos imply that you will have an entirely immersive experience, but the actual experience is very much hindered by the fact that the holograms only show up in a small window directly in front of you; anything outside that window simply isn’t rendered.

Oddly enough, the demo seemed tailored to highlight this shortcoming. The very first thing that happens in the game is your alien buddy crash landing somewhere in front of you. He makes a wide path from your upper-left to your lower-right, and if you don’t know exactly where to look, you’ll miss part of it. I scanned across and saw the trail of his path, which disappeared and reappeared. So right away, my first impression of the HoloLens was a letdown about not being completely immersed. If he had just popped through the wall in a single spot, this thought would have been delayed until later, when I might have had a chance to get into the zone a bit more.

Even later, though, the issue persists. The aliens pop out of the wall and fly a couple of feet to the left or right, and as soon as this happens you can no longer see them. You have to physically turn your head to adjust the field-of-vision and center the alien in front of you, which causes you to lose sight of the hole in the wall. This means aliens might be almost right in front of you, shooting at you, and you would have no idea. The magical experience of HoloLens ultimately gets lost in the fact that everything is still happening on a small rectangular screen; the screen is just very close to your face.

To be fair, this is still a developer preview. It’s a few steps above a prototype, but it is not the consumer version yet. I am 100% sure that Microsoft is aware of this criticism, because when we brought it up to the developers, one of them got kind of defensive about it. “Well, did that break the experience?” Uhh, yes Vlad, it did actually break the experience, because (1) I was constantly reminded that I was looking at everything through a screen and (2) I could not see all my enemies at one time. That doesn’t mean it was not really cool; it just wasn’t completely immersive like you imagine. This device is definitely good enough for people to start developing games with, but I think HoloLens will never be able to live up to developers’ imaginations unless the field-of-vision is expanded.

Without much 3D experience, and without a HoloLens, I’ve been wondering how I can prepare to develop for it on the off chance that I get the opportunity to buy one next year and convince the Mrs. that $3,000 is lame compared to the opportunity of being an early adopter. Based on my conversation with the developers, it seems that the answer is to just start developing stuff in Unity. You can write native apps for HoloLens, but they have done a lot of work to make using Unity a good experience, so for 3D development that is the recommended solution. This is good news, because Unity is a tried-and-true platform that we can start developing on now. I’m sure converting a project to work on the HoloLens won’t be entirely seamless, but in theory it should not be too horrible.

My overall impression is that I’m still excited about HoloLens’ potential. Concerns I had about the holograms not looking or behaving quite right were completely put to rest. I’m a little bummed that I can’t use HoloLens as my main PC, but the field-of-vision issue is now my only major concern. Hopefully, this is something that they can fix soon (although don’t expect it to be fixed in the developer preview next year).

TL;DR:

  • HoloLens was very cool
  • The developer preview model is good enough to start developing apps on
  • The field-of-vision is too small
  • HoloLens runs Windows 10 but apparently does not run the full Windows 10 experience

Image Progress Indicator for CSS Dummies such as myself

I pretty much hate CSS. It’s black magic that’s hard to debug and almost never does what I expect it to. I think Bootstrap and other such frameworks are incredibly helpful. That being said, sometimes you just have to cowboyorgirl up and write some custom CSS to get the job done.

It’s a pretty common use case to want to show a progress indicator where some cute graphic is revealed as you get closer to a goal. For example, you are raising money for an ice cream party, and you want to reveal an ice cream bar as you get closer to your goal. Or maybe you have a game with a finite number of points and you want the user to see what portion of a trophy they have earned.

Below is just such a game. The rules of the game are that you click the button and you get a random number of points. Go ahead and play it for the next 20 minutes to fully understand and enjoy the demo.

See the Pen progress by Troy Shields (@troylelandshields) on CodePen.

Read More



AI will destroy us


Artificial Intelligence has made some big splashes in the media lately. People’s interest tends to be piqued when you have geniuses like Elon Musk saying we are “summoning the demon” with our pursuit of AI. Musk makes it sound like right now, at this very moment, there is some laboratory in the depths of Silicon Valley with a bunch of nerds sacrificing old computers in an attempt to raise iFrankenstein. It probably is not that dramatic… (not yet, at least; give it another 10-15 years).

Read More


The key to rapid development is to stop and write a test

Several months ago Doug Leonard, Colton Shields, and I started bringing an idea from Doug’s mind into your reality. bettrnet.com was born (and has since been reborn a few times and surgically altered and is currently in the process of going through a painfully awkward puberty before it will likely be burned and rebirthed again like the beautiful phoenix we dream it to be).

At the time, the three of us collectively decided it was a bettr (hold the ‘e’ from now on) idea to prioritize getting a working product up and running over “wasting” time writing tests. The name of the game was rapid development and deployment.

I now wish I could go back and kick the three of us in the shins for making this decision. Here’s why:

Read More


what must Microsoft do for the HoloLens to succeed?

The question to think about is: why does the iPhone destroy Windows Phone in the market? There are a number of reasons, but price, hardware, and UI aren’t the main factors in my opinion. The ultimate reason, if you trace it back, is the availability of apps. Which, interestingly, is part of the reason Microsoft made a killing in the 90s over Mac computers.

Read More