Why the Blackmagic Pocket Cinema Camera Is Not a Good Vlogging Cam

Apr 16, 2018

Blackmagic Design Pocket Cinema Camera 4K

At NAB 2018, Blackmagic Design announced a new camera slated for release in September 2018: the Pocket Cinema Camera 4K, the successor to the original Pocket Cinema Camera released in 2014. Much like its predecessor, it features a somewhat pocket-sized form factor and a Micro Four Thirds mount. But unlike the old model, it now features an actual MFT-sized, 13-stop sensor with dual native ISO, much like the GH5s. For all we know, it could actually be the same sensor. It records ProRes and Cinema DNG RAW to SD and CFast cards. It also features a large 5-inch, 1000-nit display on the back of the camera that should be great for focusing 4K footage.

That all sounds great, and that's precisely why I preordered it. However, there are some quirks that might turn you off. For one, it features a record button on the front that's accessible with a thumb when you're recording yourself. But the display is fixed to the back, and the camera is fairly heavy at a projected 750 g. To use it as a vlogging camera, you would actually need to mount another display on top that can be rotated so you can frame yourself while recording. The battery, with a projected recording time of 45 to 60 minutes, is not that great either, which means you also need to carry an extra battery pack. This makes the setup heavy and hard to carry around. On top of that, the camera does not have a stabilized image sensor, which will make handheld footage quite shaky. I also doubt that the camera fits on most lightweight gimbals like the Zhiyun Crane due to its increased horizontal size of 18 cm (7 inches; the Canon 1D X is about 16 cm).

Vlogging is all about recording yourself quickly whenever you feel like it, but all these downsides make the camera hard to use that way. The Cinema DNG workflow is not ideal for a quick turnaround, and image quality and a cinematic look are not that important for a vlog. You're better served with a small and lightweight Sony or Canon camera with a flip-out screen, like the RX100 or the EOS M50.

The Blackmagic Design Pocket Cinema Camera 4K is not a vlogging camera and not an action camera. Instead it will be great for stationary setups like interviews, landscapes and weddings, and that's how you should use it.


ProRes RAW is Here

Apr 09, 2018

Apple just announced ProRes RAW, a new high-efficiency RAW codec that enables instant playback and editing without the need for any conversion. The promise: the performance of ProRes with the flexibility of RAW. The codec is supposed to be as easy to use as the existing ProRes options while letting you import, edit and grade video with RAW Bayer data straight from the camera sensor, without slowing down the edit.

There are two variations of ProRes RAW. ProRes RAW itself is the equivalent of ProRes 422 HQ in terms of data rate, while ProRes RAW HQ is the equivalent of ProRes 4444 XQ. The data rate of ProRes RAW HQ is just a fraction of that of uncompressed 12-bit RAW. Compression-related visible artifacts are very unlikely with Apple ProRes RAW, and extremely unlikely with Apple ProRes RAW HQ.

Multiple launch partners have announced support for the codec in their products. Atomos provides ProRes RAW recording with the Shogun Inferno and the Sumo19 for the following cameras:

  • Canon C300 Mark II
  • Canon C500
  • Sony FS700
  • Sony FS5 / FS5 Mark II
  • Sony FS7 / FS7 II
  • Panasonic Varicam LT
  • Panasonic EVA1

DJI will provide ProRes RAW recording with its Zenmuse X7 Super35 camera, which can be mounted on a gimbal or under an Inspire drone.

Unfortunately, Blackmagic Design has not announced support for ProRes RAW in any camera yet, but that could happen later this year via firmware updates.

ProRes RAW editing is available with Final Cut Pro 10.4.1 and requires at least macOS 10.12.4.


Is the Nokia Steel HR Smart Watch an Apple Watch Killer?

Apr 03, 2018


Around two weeks ago, I saw an advertisement at a train station for the Nokia Steel HR smart watch; at the bottom it said “holds 25 days of battery charge”. I immediately thought: that can’t be right. But the watch looked good, so I bought one to test it out. Two days later it was in the mail, and I put it on right away, only to realize the battery was dead. The watch comes with a USB charging adapter that connects with two pins instead of charging by induction. After about an hour the battery was at 100%. I put the watch on, and frankly, I’ve only taken it off my wrist to shower since.

Nokia bought the watch technology from a company called Withings, which also had a smart watch called the Activité. I’m guessing this new model is something of a successor built on the same technology. The HR in the name stands for heart rate: the watch periodically measures your pulse, and you can also run it in a continuous heart-rate mode during workouts. It also tracks your steps, your daily mileage, your calories burned and the current date, and it offers a smart alarm feature that uses the integrated sleep tracking to wake you up when you’re not in a phase of deep sleep. And the best part: it always shows the time, because it has actual analog hands. The watch connects to your iPhone or Android smartphone via Bluetooth, and once you open the “Health Mate” app, it synchronizes the health data to the phone. On the iPhone, the app also integrates with HealthKit.

I am really impressed. Not only does the watch look good with its circular face, it also works well. I couldn’t find any bug or issue with it so far. And the battery really does last a long time: after around one and a half weeks of 24-hour use, the watch still has 50% charge left. That alone frees you from bringing an extra charger on a trip. My only wish is that the watch could display a second time zone; that would really help when traveling or working with overseas colleagues.

Do I think it’s an Apple Watch killer? No, of course not, but I think it’s a contender worth checking out. The Apple Watch still has a lot more features and apps, although their usefulness is arguable. For sports and workouts the Apple Watch is and will remain the best watch, but for daily and casual use, I think I prefer the Nokia Steel HR. It offers a lot of the same benefits as the Apple Watch, but it is far more convenient and much more fashionable, at least to my taste.


Workaround for Buggy DNG Handling in macOS

Mar 04, 2018

Yesterday I released a new version of my video post-processing tool Colorcast. Among other things, this new version includes support for Cinema DNG, an industry standard for storing RAW video. Sometimes tools convert a camera vendor's RAW format to Cinema DNG; other times Cinema DNG is produced directly in camera. Examples of the latter are the Blackmagic Cinema Camera lineup and the DJI X5R camera, which can be mounted on a DJI Osmo or under a DJI Inspire drone.

During development of that feature, I had to deal with a number of bugs in Apple's implementation of DNG. There are configurations of DNG files that simply do not render correctly in Finder, QuickLook, Preview and even Photos, because the underlying Apple APIs do not handle those configurations correctly. I wasn't able to pinpoint the exact number of bugs, but I think there are basically two main issues:

  1. Problems handling bit depths other than 14-bit RAW
  2. Problems with tiled RAW buffers

Buggy Tiling

The first problem causes 10- and 12-bit images to draw very dark: 12-bit images 2 stops and 10-bit images 4 stops darker than normal, which is exactly what you'd expect if the decoder always assumes a 14-bit value range. The second problem just produces weird drawing issues I can't really explain. The solution, however, is conceptually simple: you need to convert the RAW buffers to 14 bit and untile them. That's easier said than done. You basically have to decode the whole DNG image yourself, convert the RAW buffers, convert some metadata like the linearization table, and write out a new DNG image. And that is exactly what I have done. Around 2000 lines of code, just to get the images properly displayed. I hope that with macOS 10.14 Apple comes around and fixes those issues (Radars: 37538394, 31032063, 30754552).
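
The bit-depth half of the fix boils down to a left shift per sample. Here is a minimal sketch of the idea (my own illustration, not the actual Colorcast code); metadata like BitsPerSample, WhiteLevel and the linearization table must be rewritten to match:

#include <stddef.h>
#include <stdint.h>

// Scale 10- or 12-bit RAW samples up to the 14-bit range the macOS
// decoder apparently assumes.
static void convert_to_14bit(uint16_t *samples, size_t count, int src_bits)
{
    int shift = 14 - src_bits;                  // 2 for 12 bit, 4 for 10 bit
    for (size_t i = 0; i < count; i++) {
        samples[i] = (uint16_t)(samples[i] << shift);
    }
}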

Nevertheless, I found this conversion method so useful that I also created a small batch conversion application called DNGDoctor and put it into the Mac App Store. Check it out. And if you have a graphics application that deals with DNG images and would like to ship this fix in your app, just let me know.


Open-Source Objective-C API for Magic Lantern

Feb 06, 2018

Today I updated Colorcast, my app currently in development, to version 0.4, which includes a new RAW engine and support for Magic Lantern video files. You can read more about this release here.

I'd also like to introduce you to an open-source Objective-C project and API (no Swift yet) for reading Magic Lantern video files and converting the Magic Lantern specific RAW format to DNG. Magic Lantern is an open-source project published under the GPL license. To comply with the license and still include support for it in a commercial product, I developed a helper binary that runs as a separate process; the main app binary talks to it via inter-process communication. I published the project on GitHub. Please check it out, watch it, star it, fork it and spread the word. It would be great if other projects could also benefit from my work and thus spread the word about Magic Lantern, which in my opinion is a great project.
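
The mechanics are roughly as follows; this is just a sketch with an illustrative helper name and request format, not the project's actual protocol. The GPL-licensed converter ships as its own executable, and the app talks to it over pipes:

#import <Foundation/Foundation.h>

// Launch the GPL-licensed helper as a separate process.
NSTask *task = [[NSTask alloc] init];
task.launchPath = [[NSBundle mainBundle] pathForResource:@"mlv-helper" ofType:nil];
NSPipe *inPipe  = [NSPipe pipe];
NSPipe *outPipe = [NSPipe pipe];
task.standardInput  = inPipe;
task.standardOutput = outPipe;
[task launch];

// Send a conversion request and read the helper's reply.
NSData *request = [@"convert clip.mlv\n" dataUsingEncoding:NSUTF8StringEncoding];
[inPipe.fileHandleForWriting writeData:request];
NSData *reply = [outPipe.fileHandleForReading readDataToEndOfFile];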

Also included in this project is the beginning of a second macOS app solely dedicated to batch processing Magic Lantern files: converting to Cinema DNG, compressing and decompressing, etc. I'd be happy to find some collaborators.


Thoughts on the DJI Mavic Air

Jan 25, 2018

DJI Mavic Air

On Jan 23, 2018, DJI introduced a new drone to its consumer lineup, the DJI Mavic Air. Product-wise, the Mavic Air sits between the smaller DJI Spark and the DJI Mavic Pro, but in many ways it is a much better drone than both.

While it has the same camera sensor as the Mavic Pro, it records at 100 Mbit/s instead of the Pro's 60 Mbit/s. This is a significant upgrade and produces videos with less compression and higher visible quality. Anybody who has used the Mavic Pro knows that the image quality quickly falls apart, especially in 4K.

It also looks like the Mavic Air, when folded up, is even smaller than the Spark, because the Spark's legs don't fold. If price isn't a concern for you, there is no reason to buy the Spark anymore. Apart from the price, I don't see any upsides to it. It is the much lesser product.

But the Mavic Air makes even the Mavic Pro a hard sell now. Apart from the longer flight time, which can be offset by simply carrying more batteries, I don't see any advantages over the Air.

Even though the Mavic Air looks great and seems like an awesome drone, I am holding out for the update to the DJI Mavic Pro sometime this spring or summer. My guess is that DJI will give it the better camera processing, 16 GB of internal storage and the rear collision detection sensors; at least they should.


Why I bought a Sony Alpha 7 Mark II in 2018

Jan 24, 2018

Yes, it is 2018 and I bought a three-year-old camera new. Here is why. I've always been a Canon shooter. I started around 15 years ago with a Canon analog single-lens reflex (SLR) camera. I took it on a couple of trips, but switched to a Canon EOS 350D two years later, because the ability to quickly check your images was priceless. This 8-megapixel camera served me very well for a couple of years, and in 2015 I switched to the Canon 5D Mark III.

The Canon 5D series cameras are awesome, and together with a great lens they produce amazing images. But especially in combination with a decent zoom lens, such a device is heavy, and you think twice about bringing it on a hiking trip to the mountains. The Sony mirrorless cameras are much lighter, especially in combination with a landscape lens like the Sony Zeiss 35mm F2.8, which weighs only 120 g. The lower weight makes it possible to carry more optional gear, like a travel tripod.

I sometimes shoot pictures of my family, especially in the evening when there is not a lot of light. With the Canon 5D, I always had to use a flash. While that works, the images never looked very natural. With a Sony sensor, you get by with a lot less light and a higher ISO setting. In combination with a fast lens, you get amazing images without a flash, even in low-light situations. Additionally, the Sony Alpha 7 Mark II has in-body image stabilization built in, which helps with longer exposures in handheld shooting situations, although it doesn't help with fast-moving kids.

Compared to the Sony Alpha 7R series, the Alpha 7 only shoots 24 megapixels, while the 7R shoots 42. But in my mind, 24 megapixels are plenty to work with; 42 megapixels are overkill and only useful for professional landscape photographers who want every little detail in an image. For a casual photographer like me, the 24 megapixels of the Sony Alpha 7 are more than enough, and they compare to the 5D Mark III in this regard.

Before purchasing the Alpha 7, I also checked out the Sony Alpha 6500, because it has most of the features of the Alpha 7, plus it can shoot video in 4K. However, since I mostly wanted a camera for stills, I preferred a full-frame sensor over an APS-C crop sensor for all the benefits it offers. For one, you get much shallower depth of field with a full-frame sensor than with a crop sensor, and you have a lot more flexibility when it comes to lens selection. I also bought the Fotodiox Pro E-Mount to EF-Mount adapter with autofocus support, which allows me to use all my old but awesome Canon lenses. You don't get the full AF performance with an adapter, but in practice I found it's good enough.

Last, I want to mention the price. I got my Sony Alpha 7 for 1300 EUR on Amazon, and Sony offered 200 EUR cashback. The above-mentioned Alpha 7R is around 3000 EUR, and the Alpha 6500 also goes for 1300 EUR. For me, the Alpha 7 Mark II simply offered the best value for the price.


Using Metal Performance Shaders with Core Image

Jan 18, 2018

Core Image is a great Apple technology for manipulating images using the GPU. I am using it extensively in Colorcast, my video post-processing tool for macOS. With Colorcast, you can quickly color correct and color grade video footage you've recorded with a video camera. For this, it is very helpful to have something called video scopes. Video scopes let you look at the image in a more analytic way. You can, for example, directly see if the image is correctly exposed or if the white balance and the skin tones are correct. There are multiple types of video scopes, and some of them are already integrated into Colorcast.

Prior to version 0.3.1, these video scopes were calculated on the CPU, since Core Image kernels can only calculate image content on a per-pixel basis, which is not ideal for something like this, as I'll explain later in this post. But in macOS 10.12 and iOS 10.0, Apple added a special kernel type (CIImageProcessorKernel) to Core Image, which makes it possible to integrate Core Image with other image rendering technologies, like Metal Performance Shaders (MPS). Metal Performance Shaders offer a lot more flexibility than plain Core Image kernels.
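
The integration point looks roughly like this; a minimal sketch with illustrative names, not the exact Colorcast code. A CIImageProcessorKernel subclass receives its input and output as Metal textures, plus a command buffer to encode compute work onto:

#import <CoreImage/CoreImage.h>
#import <Metal/Metal.h>

@interface WaveformKernel : CIImageProcessorKernel
@end

@implementation WaveformKernel

+ (BOOL)processWithInputs:(NSArray<id<CIImageProcessorInput>> *)inputs
                arguments:(NSDictionary<NSString *, id> *)arguments
                   output:(id<CIImageProcessorOutput>)output
                    error:(NSError **)error
{
    id<MTLTexture> inTexture = inputs.firstObject.metalTexture;
    id<MTLTexture> outTexture = output.metalTexture;
    id<MTLCommandBuffer> commandBuffer = output.metalCommandBuffer;
    // Encode the compute passes described below onto commandBuffer,
    // reading from inTexture and writing the scope into outTexture.
    return YES;
}

@end

Applying it is a one-liner: [WaveformKernel applyWithExtent:extent inputs:@[inputImage] arguments:nil error:&error] returns a CIImage like any other filter output.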

Let's take an RGB Parade as an example and explain what is needed to calculate an image like that.

RGB Parade

An RGB Parade is a video scope that renders a waveform of the red, the green and the blue channel side by side. Every pixel in an image has an X and a Y coordinate associated with it. The waveform diagram projects the X position of the pixel onto its x-axis and the actual pixel value onto its y-axis. The intensity of a diagram pixel hints at the overall count of pixels with that particular value. Cinema5D has a good blog post that explains how to use these scopes. Since rendering a pixel in the waveform involves counting the number of pixels that have Y as their particular value, you can see that doing this for every possible pixel is quite time consuming: an image with 512x512 pixels would need 512 times more time to render than a normal color filter.

And this is where Metal Performance Shaders come into play. You can pass the Metal shader an integer buffer that has one entry per image pixel:

// First pass: one GPU thread per source pixel accumulates counts.
kernel void
scope_waveform_compute(
    texture2d<float, access::sample> inTexture [[texture(0)]],   // source image
    volatile device atomic_uint* columnDataRed [[buffer(0)]],    // one counter per (column, value) bin
    sampler wrapSampler [[sampler(0)]],                          // sampler used to read the source
    uint2 gid [[thread_position_in_grid]]                        // this thread's pixel coordinate
)

For every pixel of the source image, you increase the integer at the corresponding position in the buffer by one. Make sure to do that atomically, since the shader function runs in parallel on the GPU for every pixel.

ushort w = inTexture.get_width();
ushort h = inTexture.get_height();
ushort hmax = h-1;
float4 srcPx  = inTexture.sample(wrapSampler, float2(gid));

// Map the red value to a row bin and bump its counter atomically
// (note: clamp takes the value first, i.e. clamp(x, minval, maxval)).
ushort y = (ushort)(clamp((float)(srcPx.r * hmax), 0.0, (float)hmax));
atomic_fetch_add_explicit(columnDataRed + ((y * w) + gid.x), 1, memory_order_relaxed);

In a second render pass, you take all those integer values and write the corresponding color information into the texture of the waveform diagram.

ushort w = inTexture.get_width();
ushort h = inTexture.get_height();
ushort hmax = h-1;

// Flip y so high pixel values end up at the top of the scope.
uint y = (uint)(clamp((float)(hmax-gid.y), 0.0, (float)hmax));
uint cid = (y * w) + gid.x;
uint red = atomic_load_explicit( columnDataRed + cid, memory_order_relaxed );
[...] // green and blue are loaded the same way from their own buffers

// Scale the counts into displayable 0..1 intensities.
float4 out = float4(clamp(red / 5.0, 0.0, 1.0),
                    clamp(green / 5.0, 0.0, 1.0),
                    clamp(blue / 5.0, 0.0, 1.0),
                    1.0);

This method only takes 2x the time of a normal color filter, which is not that bad.

The complete code on GitHub includes the Core Image filter, the CIImageProcessorKernel subclass that applies the shader to the Metal texture, and the shader code itself. The Core Image filter can be used like any other filter. Make sure to create the image from a Metal texture and render the CIImage inside an MTKView subclass.
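
That last step might look like this (again a sketch; the filter name and the frameTexture variable are illustrative):

// Wrap the decoded video frame, which already lives in a Metal texture.
CIImage *input = [CIImage imageWithMTLTexture:frameTexture options:nil];

// Apply the scope filter like any other Core Image filter.
CIFilter *filter = [CIFilter filterWithName:@"RGBParadeFilter"];
[filter setValue:input forKey:kCIInputImageKey];
CIImage *scope = filter.outputImage; // draw via a CIContext inside an MTKView subclass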


Night-Mode For Your iOS App

Jan 15, 2018

Night-Mode or Dark-Mode has been a favorite feature request from users of all kinds of apps. The reason is obvious: you use your phone in all kinds of situations, day and night, and at night your eyes are accustomed to less light. Looking at a glaring smartphone display can be very jarring, and turning down the display brightness is not a good solution, because everything on the display just becomes harder to make out. With a Night-Mode or Dark-Mode feature, the colors of text and background are essentially inverted, which is much easier on the eyes at night.

Since there is no dedicated night-mode setting in iOS (Android may have one), app makers resort to all kinds of techniques to switch between day and night automatically. A method I often see is using the ambient light sensor. This can be very annoying in situations where it’s neither light nor dark, and apps often switch back and forth multiple times. When I added the feature to Instacast, I wanted to go a different route: I wanted Instacast to switch only twice a day, in the morning and in the evening. So I came up with what I think was a unique method at the time.

Instacast does not measure the actual light in the room. Instead, it asks for your broad location, and based on where you are on the planet, it calculates approximately when the sun rises and sets. Now you might say that querying the location, triggering GPS and thus draining the battery just to switch user interface colors is wasteful, and you’d be right. That’s why Instacast only asks for your approximate location, which the phone has already stored from connecting to a cell tower; only that cached data is passed to the app, and no actual sensor is fired. After all, for calculating the sun angle it doesn’t really matter whether you’re 100 kilometres here or there. Also, please don’t look at the sun and wait for Instacast to switch night mode exactly on time. The calculation is only an approximation; it does not account for terrain height or other physical circumstances, like the earth not really being a sphere, but a potato.
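
The core of the calculation is short. Here is a simplified sketch of the idea (my own version, not necessarily the code in the GitHub project); the Core Location part just asks for coarse accuracy so the phone can answer from cached cell-tower data:

#import <CoreLocation/CoreLocation.h>
#include <math.h>

// Coarse accuracy is enough for the sun angle and avoids firing up GPS:
//     locationManager.desiredAccuracy = kCLLocationAccuracyThreeKilometers;

// Approximate hours of daylight for a day of the year and a latitude.
static double daylight_hours(int dayOfYear, double latitudeDeg)
{
    // Approximate solar declination in degrees.
    double declDeg = -23.44 * cos(2.0 * M_PI / 365.0 * (dayOfYear + 10));
    double phi   = latitudeDeg * M_PI / 180.0;
    double delta = declDeg * M_PI / 180.0;
    // Hour angle at sunrise/sunset; the clamp covers polar day and night.
    double x = fmax(-1.0, fmin(1.0, -tan(phi) * tan(delta)));
    double omegaDeg = acos(x) * 180.0 / M_PI;
    // The earth turns 15 degrees per hour, and the sun is up for 2 * omega.
    return 2.0 * omegaDeg / 15.0;
}

Sunrise and sunset in local solar time are then simply 12 minus and 12 plus half of that, which is plenty accurate for flipping UI colors.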

So far this method has been a huge success. Nobody has ever written in to complain about it being unreliable or wrong. I’ve put the daylight calculator on GitHub if you’d like to use it in your app.


My Review of Gearo.de

Jan 09, 2018

When I started writing software for developing video footage last year, I quickly needed hardware to test the software with. I admit it, I have GAS (gear acquisition syndrome), so this was a welcome occasion to give in to my desires and buy some stuff. That was fun at first, but of course it very quickly developed into a financial problem, so I had to dial it back. I had to find a way to test a lot of equipment without ruining myself financially. I searched around for where and how to rent video and photo equipment, and that's where I came across Gearo.de.

Gearo is a German startup that operates in Germany and Austria. It works like Airbnb, but with photo and video equipment. If you have some equipment lying around at home, you can use the platform to rent it out to other people for one or more days. And if you'd like to check out a certain camera or a particular lens without buying it, you can rent it from somebody in your local area.

I liked the idea and created an account. The setup process is quite painless and fast; within minutes you can start renting stuff. Once you've decided on an item and a rental period, you request a rental, and the owner of the gear approves or denies the request. I had a couple of denials at first and started to get annoyed, but the Gearo staff is very kind and will help you find an alternative. So far I have rented a Panasonic GH5 and a Sony Alpha 7S.

I also looked around in my closet and checked what I could rent out. You can see my stuff on my profile page. There haven't been a lot of requests yet, but I have already rented out my Zhiyun Crane v2 two times and earned about 60 EUR so far. Once you have an agreement with the equipment owner, you take the car and just pick it up. The renting process on site is very quick: you basically take a picture of the gear, note the serial number and the ID of the renter, and both parties sign the contract on the gear owner's smartphone. So far it has been a lot of fun. I have met a lot of creative people with great and inspiring projects and ideas.

It's too bad that the service only operates in Germany and Austria so far, because I think it's a great idea that could connect creative and inspiring people around the world. Keep up the good work, Gearo!


Everything That's Wrong with Hackintosh

Dec 27, 2017

A week ago I wrote about my Hackintosh build. At first I was very enthusiastic about building my own computer. Everything looked good on paper. I did a lot of research on the interwebs and watched a lot of YouTube videos. Everybody was always bragging about performance and the low cost, but nobody was telling the whole story. This is my attempt to do just that.

Price

It's correct that on paper the sum of all components costs less than a comparable Macintosh, but that does not account for the hours you have to invest in building the thing. It's not just screwing all the hardware components together; it's also the installation of macOS and especially getting all the hardware components to work correctly. The whole build took me at least a full work week. You do the math: if I had worked for a client in that time, invested the income in Apple hardware, and subtracted the resale value of my current iMac, I could have bought a new low-end iMac Pro.

Display

Another big price component that is never mentioned is a decent screen. There is really no good alternative to the LG 5K display at the moment, and a normal hackintosh is not able to run a display at 5K resolution. That has to do with the DisplayPort spec and how two streams are multiplexed for 5K. It is theoretically possible with a GC-Alpine Ridge, a Thunderbolt PCI card that takes two DisplayPort streams and multiplexes them into one Thunderbolt output stream. But judging from the forums, nobody seems to have tried it, and the hardware is not easy to get. That means you are stuck with a 4K display, and no PC 4K display matches the resolution of the LG 5K. I went with the Dell UP2718Q UltraSharp 27 4K HDR monitor, which costs around $1500. In terms of color space and brightness it's competitive with the LG UltraFine displays, but it's expensive, of course.

Graphics

One of the reasons for building a hackintosh, for me, was using a beefy graphics card. There are multiple options: you can go with Nvidia, which means installing drivers and not getting good OpenCL performance (Final Cut Pro), or you can go with AMD. I decided on AMD because I thought I wouldn't need to fiddle around with drivers. In reality, that's not true. Only a handful of AMD graphics cards seem to work in High Sierra. The AMD Vega 64 works well, but runs its fans at light speed. In the end, I settled on an AMD RX 580 for now, which is the same graphics card that runs in the new high-end iMac. So that's that. I hope to upgrade to a Vega 64 sometime in the future. I wasn't able to get the internal Intel GPU running; the process is really too cumbersome and involves a lot of kernel panics. No time for that.

Audio

Depending on which motherboard you use, you need to inject different kext modules. It's amazing that there are people who care enough to make this work. High Sierra, however, broke support for a lot of audio drivers. I really couldn't get audio to work after 8 hours of trying different drivers, versions, etc. I ended up attaching an M-Audio USB interface to the USB port. Works fine.

USB

Speaking of USB: it mostly works for me, but reading through the forums, I get the impression that USB can also be a major source of headaches. Wired keyboards and mice work, audio as well, but at first I wasn't able to mount external hard drives. There was a solution for that, and although it took me a while to make it work, external disks do mount now. However, I can't keep them connected to the PC at all times, because at some point the machine just freezes, which doesn't happen when the external drives are not connected.

Bluetooth

Bluetooth also mostly works. I purchased the IOGEAR Bluetooth 4.0 USB Micro Adapter because I read that it's the best solution. And yes, the Magic Keyboard and Magic Mouse do work, AirPods too. However, running external hard drives on the same USB bus can cut bandwidth from the Bluetooth adapter, and the mouse gets a noticeable lag. This can go as far as a total shutdown of the Bluetooth adapter; disconnecting and reconnecting it mostly works around the issue.

Hard drives

Disk space is a big upside, and I can't really complain about it. I am using a Samsung 960 Pro in the M.2 slot and 5 hard drives connected over SATA. And I love it. It's really nice to have a lot of disk space without a loud RAID on the desk. SATA has also been working very reliably so far.

Conclusion

In summary, I would never recommend building a hackintosh to anybody unless they have the time and energy to make it work. A hackintosh is not about the money; it's about the challenge of making it work. If you need a machine for your professional work, get an iMac or an iMac Pro. Personally, I love having a lot of disk space inside the machine. My hope is that Apple comes out with a new modular and upgradable Mac Pro in 2018, which would make me want to demote the hackintosh to a Windows-only machine.


RX Vega 64 Hackintosh for High-End Video Work

Dec 19, 2017

During my work on the Colorcast app, I was testing ProRes decoding into 16-bit Metal textures instead of the usual 8-bit Core Image workflow. After all, if you have an expensive camera that can record ProRes in 10 or 12 bit, you don't want your color correction app to downgrade your footage to 8 bit and thus lose a lot of color information. From a development perspective, getting this to work with Apple APIs is not easy and involves writing custom Metal shaders.

Aside from that, the performance I was getting on my iMac 5K (late 2014) and on my MacBook Pro (mid 2015) was not great. Playback didn't come anywhere close to realtime. I first attributed that to the slow graphics chips Apple traditionally integrates into its lineup, but more testing revealed that the actual bottleneck is I/O bandwidth and RAM speed. It was clear to me that my iMac was simply not suited for 4K color editing. It was time for a new, beefy machine. Some research later, I had narrowed it down to 2 options: either the new 2017 iMac 5K or the new iMac Pro. I don't know anybody who owns a 2017 iMac 5K, and the iMac Pro wasn't out yet, so I wasn't able to test performance on an actual machine and get a better understanding of what is really needed here.

After comparing hardware specs and configuring options in the Apple Online Store, it really came down to the price. A new iMac 5K would have cost me 3600 EUR; the new iMac Pro with the 1TB SSD option would likely have cost me 5800 EUR. After seeing these prices, I also configured a true high-end PC with components comparable to a mix of both models. This machine came down to 2300 EUR excluding the display. If you add a really high-end 10-bit HDR display, you end up at 3700 EUR for the PC. However, the PC contains an RX Vega 64 graphics card (11 teraflops), while the iMac comes with a Radeon Pro 580 (6 teraflops).

Very quickly I ruled out the new iMac Pro. It is way too expensive and really overkill for my purposes, aside from the fact that it's a new design and who knows what's wrong with it and its thermal footprint. So the decision came down to a new iMac or the PC. I did a lot of research into the Hackintosh thing, and it seemed possible to build a really great PC, run macOS on it, and get GPU performance that is unheard of in Macs (sad but true). So I figured the risk was worth it.

Here's a complete list of all components I used for the build:

  • Gigabyte GA-Z270X-UD3 motherboard
  • Intel Core i7 7700K 4.20GHz processor
  • Gigabyte Radeon RX Vega 64 graphics card
  • Samsung SSD 960 PRO Series NVMe 1TB
  • Corsair AX Series AX760 power supply
  • 32GB Crucial Ballistix Sport LT DDR4-2400 DIMM RAM
  • Noctua NH-D15 Tower cpu fan
  • TP-LINK Archer T9E AC1900 WLAN Dual Band PCI-E Adapter
  • IOGEAR gbu521 W6 Bluetooth 4.0 USB
  • Corsair Carbide Quiet 400Q case

components

That's a lot of stuff, and you have to assemble everything yourself. I must say that my last PC build was over 20 years ago, and I really didn't have any idea what I was doing. However, after reading through the manuals and watching a couple of YouTube videos, I managed to assemble the thing in about 3 hours (PC guys are rolling their eyes now). Turning it on for the first time and not starting a fire was a really great feeling. The system installation process is fairly simple, at least with the hardware configuration above.

Here's a quick rundown of the installation process:

  • Download macOS 10.13.2 from the App Store.
  • Copy the installer onto a USB thumb drive using a special tool inside the installer package.
  • Download the latest Unibeast from the tonymacx86.com website.
  • Use Unibeast to add the EFI bootloader to the USB thumb drive.
  • Set up your BIOS correctly.
  • Start the macOS Installer from the bootable USB thumb drive.
  • Format your hard drive using Disk Utility.
  • Continue installing macOS High Sierra.

And that's pretty much it. There are a couple of details you need to know for troubleshooting (like preventing APFS conversion, FakeSMC, EFI mounting, etc.), but writing all that down is too much for this blog post and shouldn't be necessary anyway with this build. The Hackintosh booted without any problems.

The graphics card works out of the box, with caveats. Apple is adding support for the RX Vega 64 right now, and the driver is improving with every macOS release. Prior to 10.13.2 it had some OpenGL bugs, which are fixed now. However, the GPU fans still spin at full speed, and the PC occasionally restarts when the GPU is used heavily, like with the Unigine Valley benchmark. 9to5mac had similar issues with a Vega 64 mounted inside an external Thunderbolt enclosure.

Unfortunately, I also could not get the on-board audio to work. The Sound preference pane displays audio ports, but neither input nor output works. I managed to get audio working through a simple USB thumb audio interface. Wifi using the TP-Link PCI card works flawlessly. There is really no need to install any additional kexts.

I am quite happy with it so far, and I hope the remaining GPU issues get fixed with the next macOS updates. I have yet to test Bluetooth, but from what I read, it shouldn't be a problem to get Handoff and AirDrop to work (at least as badly as on the Mac). I also still have to test Colorcast, my color correction app, on the new hardware. If you'd like to stay updated on my hackintosh build, you can follow me on Twitter.

UPDATE Jan 26, 2018:
The AMD RX Vega 64 is working on 10.13.4 Beta and is silent.


Vemedio Product Development Has Been Discontinued

Jun 01, 2015

For economic reasons and to concentrate on new projects, I had to close my company Vemedio and discontinue all products.

Since June 2016, a new version of Instacast called Instacast Core has been available on the App Store. It has been stripped of all cloud-related features, since the servers are not running anymore.

You can ask questions on Twitter. Licenses of all applications can be activated indefinitely.


Audio Books vs. Podcasts

Apr 16, 2015

Prompted by suggestions that I should read the new Steve Jobs book, I looked at the options and decided to listen to the audio book instead. So I checked the iTunes Store and purchased the book. It downloaded 2 files. Then I started listening to it via the built-in Music app. After 2 days of using the audio book option of the app, I am already fed up with it. The user experience is really bad. I can't imagine that there are actual people who use this. It seems like an afterthought, as if nobody at Apple has looked at it in the last 6 years. There are constant problems with remembering the playback position. Scrubbing 8-hour audio files is cumbersome and imprecise. There is no support for chapter markers and bookmarks. But then all these problems got me thinking about the medium.

Why are we using a dedicated podcast app for podcasts and a music app for audio books? That doesn't make sense to me. After all, audio books are just longer podcasts. To me the medium is the same: both contain spoken words you can't listen to in one run, and you basically want to do the same things with them. All the problems and issues we see with consuming podcasts are actually amplified when listening to audio books, which are even longer audio files and even less accessible. So I thought about better ways to listen to audio books.

I split the audio book into multiple files, one file per chapter. Then I created a podcast feed and added one episode for every chapter. This method allowed me to keep much better track of my progress through the book. Podcast apps also generally do not lose track of the current playback position. It is clear to me now that using a podcast-like app for audio books presents a much better user experience than what is offered right now.

Of course, podcast apps could also be made better for these use cases. I am thinking of something similar to 'News Mode'; I'd call it 'Story Mode'. If enabled, a podcast app would reverse the sort order to 'Oldest First' and always keep the next episode downloaded. You'd start at episode #1, and the podcast app would already download episode #2. You finish episode #1 and start playing episode #2, and the podcast app auto-downloads episode #3, and so on. This feature would also be great when you want to re-listen to an older podcast starting with episode #1.
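
In code, the heart of such a feature could look something like this; all types and methods here are hypothetical, just to illustrate the logic:

// When an episode finishes, play the next-oldest one and prefetch the
// episode after that, so 'Story Mode' always stays one download ahead.
- (void)episodeDidFinish:(Episode *)episode inPodcast:(Podcast *)podcast
{
    if (!podcast.storyModeEnabled) return;
    NSArray<Episode *> *oldestFirst = [podcast episodesSortedOldestFirst];
    NSUInteger i = [oldestFirst indexOfObject:episode];
    if (i == NSNotFound || i + 1 >= oldestFirst.count) return;
    [self playEpisode:oldestFirst[i + 1]];
    if (i + 2 < oldestFirst.count) {
        [self downloadEpisode:oldestFirst[i + 2]];
    }
}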

Of course, audio books are not delivered as podcast feeds, and I won't be splitting files manually for every audio book I listen to. I am also not saying that they should be delivered that way. All I'm saying is that we as podcast app developers should not only think about podcasts; we should also think about longer forms of audio content like audio books and audio magazines and how we can make that user experience better.


Rejected for Weak Linked Frameworks

Mar 21, 2015

I got an interesting Mac App Store rejection tonight that I haven't seen anywhere else. My app submission was rejected per §2.2 - Apps that exhibit bugs will be rejected - for missing frameworks, which I had weak linked into the application but stripped out using a custom build phase in Xcode.

The scenario is as follows: the app is already available for purchase outside the App Store and includes the Sparkle framework for dynamically updating the application binary. This is not necessary for Mac App Store distribution, of course, so I removed all the Sparkle code from the app. Because I did not want to add a new Xcode target only for another build configuration, I set the Sparkle framework in the "Linked Frameworks" table to "optional".

xcode

Since weak linking frameworks and not shipping them seems not to be allowed (nothing in the App Submission Guidelines about that, of course), you can't use the "Linked Frameworks" table in mixed build environments. What I'll do instead is create two Xcode config files and hard link the necessary frameworks manually using the OTHER_LDFLAGS option.
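
Something along these lines; the file names are mine, and each build configuration gets assigned one of the two files:

// Direct.xcconfig - distribution outside the App Store, with Sparkle
OTHER_LDFLAGS = $(inherited) -framework Sparkle

// AppStore.xcconfig - Mac App Store build, Sparkle neither linked nor shipped
OTHER_LDFLAGS = $(inherited)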

Of course, it would be nice if Xcode warned you about this issue during submission validation, so that you don't have to wait 6 days for Apple to tell you.


Ideas for A Better App Store

Mar 19, 2015

Listening to Inquisitive #31 - Behind the App #5: App Review with Myke Hurley, I had some ideas about how to make the App Store much better, more sustainable for developers and less cluttered with crapware than it is today.

It's clear that the App Store as it is right now has a couple of fundamental problems, just to name a few:

  • App Review takes 7 days or more
  • App Review makes questionable decisions
  • App Store is filled up with crap software
  • App prices are not sustainable
  • App discovery is really hard

To address all these problems, it's clear that mild adjustments are not good enough anymore. The App Store needs a fundamental change. The old Downloads section on Apple's website for Mac apps actually worked much better for developers and helped consumers find great apps, because Apple tightly curated that section and only allowed higher-quality apps.

It should be said that you can't simply reject most apps from the App Store, since the App Store is the only mechanism to distribute software on iOS. You need a basic mechanism for consumers to install any app, as long as the app does not violate the law or contain serious security issues. But you also want to curate the App Store much better and hide all the crap. I am proposing a 2-level system to achieve both.

The 1st level would be that you submit your app to the App Store and get an immediate download link (like the direct iTunes link). You can start distributing this link to your customers via your website, newsletter or whatever. Every app is approved initially and goes through a legal and security check within the next couple of days, after which it could be taken down if issues are found and the developer does not comply within a certain period of time. When installing via the direct link, the system would bring up a warning that the app has not been verified by App Store editorial, similar to what Gatekeeper tells you when you download a Mac app from the web.

The 2nd level would be to apply for your app to be added to the App Store storefront. This application could be reviewed much more thoroughly than today, because Apple wouldn't be under anti-trust pressure when not allowing an app into the storefront: you could always distribute your app using the direct link. The effect would be an App Store that is not littered with all kinds of crapware. It would also increase customer confidence in installing apps from the App Store, and I imagine it would also raise the prices of those apps that make it into the storefront.

I think this 2-level system of App Store review would be much better than what we're facing today, both for the consumer and for the developer. Apple would also benefit, since it increases the quality of the App Store product. This system would also give developers more incentive to make better apps, since they'd want to be added to the storefront for maximum customer attention.

There are a lot of similar ideas out there, but at this point I have no hope of change anymore. The only thing we can do is keep talking about it.


Auto-Layout works

Mar 05, 2015

Working with Auto-Layout full-time for 2 months has assured me that the technology actually works. You can now use it exclusively instead of autoresizing. I am glad to be able to say that about an Apple technology; that is not always the case these days.

Sometimes you have to use a trick or two, but the results are worth it. For example: instead of using setFrame: to dynamically change the bounds of a view, change the constant of the corresponding layout constraint. The welcome side effect: you can animate that!
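
For example, with UIKit (the constraint outlet is illustrative; on the Mac, the same idea works via the constraint's animator proxy):

// Animate a view by changing the constraint's constant instead of its frame.
self.heightConstraint.constant = 200.0;
[UIView animateWithDuration:0.25 animations:^{
    [self.view layoutIfNeeded]; // the layout pass runs inside the animation block
}];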

Be sure, however, to set view.translatesAutoresizingMaskIntoConstraints to NO. Otherwise you might end up with lots of conflicts. The translation of autoresizing masks is a bit useless, if you ask me: either you use Auto-Layout or you don't.