More recently for web content, I've been learning ReactJS, and I've also watched the recent talks on React Native. I find that all of these tools serve two main purposes that the native frameworks do not:
- You can write the applications in a language you're familiar with
- You can make your code more transferable between platforms
Obviously these benefits vary from framework to framework. I'm sure we'll be using different components in iOS vs Android environments with React Native.
I started talking with Dann about building native Mac & Windows apps with HTML+CSS+JS using NWJS, and how it almost feels wrong. Like we are clinging to the stack almost too much. No doubt it can be done, and done well. Atom is a good example of this. But I think there are limits.
launch camera plugin ->
take photo ->
That was for each of the four pictures, so around 12 button presses. That's no good. My tech lead managed to find a tutorial on building a custom camera plugin. We figured we might as well try it for a couple of hours and see if it's practical. So I got started on it from the tutorial. I made some minor changes to it, as it had a couple of bugs. Once I could take a single photo, I modified the Objective-C code a little to build up an array of photo URLs, applied the resize & grayscale, then passed the array to the webview. This only took a few hours of work total. If I were to just pass the raw images to the webview, I'd then have to make multiple async calls to the file system. Resizing them with the canvas and grayscaling them would be much more work, especially due to the memory limitations of the webview. The simple & synchronous code on the iOS side made this process much easier.
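From the webview side, a custom plugin like this gets called through Cordova's standard `cordova.exec` bridge. Here's a minimal sketch of what that call could look like — the `MultiCamera` service name, `capture` action, and options object are all hypothetical stand-ins, not the actual plugin from the tutorial:

```javascript
// Hypothetical invocation of a custom camera plugin from the webview.
// cordova.exec(success, error, service, action, args) is Cordova's
// standard JS-to-native bridge.
function captureDocumentPhotos(count, onDone, onError) {
  cordova.exec(
    function (photoUrls) {
      // The native side has already resized and grayscaled the images,
      // so the webview just receives a plain array of file URLs.
      onDone(photoUrls);
    },
    onError,
    "MultiCamera", // hypothetical plugin service name
    "capture",     // hypothetical action name
    [{ count: count, maxWidth: 1024, grayscale: true }]
  );
}
```

The nice part is that all the heavy lifting stays behind that one call, and the webview never touches the raw image data.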
Now granted, if this project were going to Android, Windows Phone, and other devices, maintaining the code base becomes a bigger task. But I felt quite pleased with the results of building a feature on the native side. At least this way, we can still leverage our design team for the HTML & CSS. I know from now on that with Cordova, if we need more than the plugins provide us, there is a way.
I brought these kinds of examples to Dann's attention. He responded with (I'm paraphrasing): "It is quite silly that we have to use different languages for these platforms. Android runs Java, and iOS runs Objective-C/Swift; this just adds friction for developers who want to build applications. It's these grievances that cause us to explore solutions that can work on every device. We create a common API that invokes the proper platform-specific calls."
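That "common API over platform-specific calls" idea can be sketched in a few lines of JavaScript. The `vibrate` feature and its per-platform implementations below are made up for illustration; `device.platform` is the real property exposed by Cordova's device plugin:

```javascript
// Sketch of a common API that dispatches to platform-specific code.
// The implementations here are stand-ins for real native bridge calls.
var vibrateImpls = {
  iOS: function (ms) {
    // would invoke the iOS-specific plugin here
    return "ios:" + ms;
  },
  Android: function (ms) {
    // would invoke the Android-specific plugin here
    return "android:" + ms;
  }
};

// The app calls one function; the lookup picks the right implementation.
function vibrate(ms) {
  var impl = vibrateImpls[device.platform];
  if (!impl) {
    throw new Error("vibrate not supported on " + device.platform);
  }
  return impl(ms);
}
```

The whole value of frameworks like Cordova is that this dispatch table is maintained for you — until you need a call that isn't in it.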
He's absolutely right about that. Frameworks like Titanium and Cordova do go for the write once, run everywhere approach. However, with the speed that Android & iOS move at, it's hard to keep up with all the native APIs. With Titanium, you can't write native components, so you're stuffed if their API doesn't do something you need it to. This could very well apply to React Native.
But of course, we still have platform detection code. In game code I sometimes see #if windows, #if osx type directives. We write different versions of shaders depending on the OpenGL version supported. In older code we wrote platform-specific stuff, and with the Cordova app I wrote platform-specific stuff too. We all need to be smart about when we use certain tools. At the end of the day, use the right tools to deliver the experience you want to deliver.
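The shader case is the same pattern in runtime form: detect what the platform supports, then pick the matching code path. A rough sketch — the version threshold and shader sources here are illustrative, not from a real project:

```javascript
// Choose a shader source based on the GLSL version the platform supports,
// mirroring the #if windows / #if osx style of compile-time branching.
function pickShaderSource(glslVersion) {
  if (glslVersion >= 300) {
    // Modern path: GLSL ES 3.00 syntax
    return "#version 300 es\nin vec4 position;\nvoid main() { gl_Position = position; }";
  }
  // Fallback path: GLSL ES 1.00 syntax for older devices
  return "attribute vec4 position;\nvoid main() { gl_Position = position; }";
}
```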