Thursday, August 21, 2014

HTC One M7 Purple Camera Problem

The camera is one of the main reasons I purchased an HTC One (M7), with its awesome low-light performance, so I was really bummed when my pictures started getting a weird purple tint around the edges. The problem got progressively worse, to the point that the camera was essentially unusable:
So, I did some digging online and, while my case is unusually severe, the purple picture problem seems to be a common issue for HTC Ones that were manufactured early in their product run (I preordered mine and got it right at launch). No one is exactly sure what causes the problem, but people suspect it's related to heat, either in the phone itself or specifically at the "ultrapixel" camera sensor. Either way, replacing the camera module supposedly fixes it, and there have been reports of people sending their phones in for other warranty issues and getting them back with newer camera modules, whether they reported purple pics or not.

In my case, I went down to the local Sprint affiliate and inquired about a camera replacement. The tech started an insurance report and found that 'camera takes purple pictures' was one of the pre-defined claims, so he was able to order a refurbished phone for me on the spot at no cost (not even an insurance deductible!). A couple days later, I received my new phone and its camera works as well as I remembered. I'll update this post if it starts showing the same issues.

Friday, July 18, 2014

CRT-Royale and 3dfx Shaders

Two fairly new shaders have popped up that are worth mentioning: TroggleMonkey's CRT-Royale and leilei's 3dfx. They're both available in Cg format in libretro's common-shaders github repo, though CRT-Royale utilizes some advanced features that aren't available in RetroArch v1.0.0.2 (the most recent release at the time of this writing).

CRT-Royale is particularly exciting for me because TroggleMonkey managed to overcome some issues with shadow-mask emulation that I thought were totally intractable at current common resolutions (i.e., 1080p). The result is some really great phosphor emulation at reasonable scale factors, along with all of the bells and whistles users have come to expect from CRT shaders, including "halation"/glow, bob-deinterlacing support and curvature, along with a ton of options that are unique to this shader.

I'm not going to cover many of them here because it would take forever to get screenshots, and there's not much point when TroggleMonkey has included a very informative README with the code, along with support for RetroArch's new runtime shader parameters (so you can see the effect of your changes in real time). However, I thought the shadow mask stuff was super-cool and deserved some closeups. Here's a shot of the shader with default settings (as always, click to embiggen):

First, we'll look at my favorite effect, the in-line shadow mask (called slot-mask in the code):
This is the same configuration I was shooting for with my PhosphorLUT shader, and you can see that the phosphor layout has that familiar vertical, staggered orientation:
Next, we have the very similar aperture grille:
 
The main difference between this and the in-line slot mask is that it doesn't have the slight staggering (only really visible in the closeups and at super-huge resolutions). In the closeup of the LUT, you can see that it just removes the crossbars between the triads:
Last, we have the dot-triad shadow mask (called "shadow-mask-EDP" in the code), which was common on CRT computer monitors:
As you can see, it looks very similar to the high-res shots I took of my Compaq CRT monitor (from my emulation/TV post). And here's the dot-triad blown up:
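Incidentally, if you want to flip between these mask types without digging into the code, the runtime parameters can also be pinned in a preset so they stick between sessions. Here's a rough sketch of what that looks like; the parameter name and value are from memory and may not match the current repo, so treat them as placeholders and check the README for the real ones:

# appended to a copy of the stock crt-royale .cgp preset (parameter name/value approximate)
parameters = "mask_type"
mask_type = "1.0"

Load the modified preset like any other and the override takes effect without touching the shader source.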

The other shader I wanted to show is leilei's 3dfx shader, which tries to mimic the output of a 3dfx GPU, known for its distinctive dithering among other things. In addition to obvious applications like RetroArch's Quake core, the N64's GPU produced output with many of the same characteristics, which makes the shader a good fit for RA's Mupen64plus core. When run at low-ish internal resolutions and paired with per-texture 3-point filtering, you can get a pretty good approximation of what N64 games actually looked like.
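Trying it out is just a matter of pointing a single-pass preset at the shader. A minimal sketch; the path to leilei's shader inside common-shaders is a guess on my part, so adjust it to wherever the file actually lives in your checkout:

# minimal single-pass Cg preset; save as 3dfx.cgp (shader path below is approximate)
shaders = "1"
shader0 = "3dfx/3dfx.cg"
filter_linear0 = "false"

Nearest-neighbor sampling (filter_linear0 = "false") keeps the dithering pattern crisp instead of smearing it.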

Here are some shots of the shader at 320x240 and 640x480 (i.e., native and double res, respectively):
Native res:
Double internal res:
As you can see, the doubled res looks significantly sharper, but the scanlines are thinner and less pronounced (and there are twice as many of them) compared with the native res. I also like native res because it makes HUD/menu items look a little less "pasted-on":
Native res:
Doubled internal res:






Friday, June 20, 2014

True Hq2x Shader Comparison With xBR

Background
I was shocked to learn recently that the shader I and others have long called Hq2x is/was actually a misnamed port of the ScaleHQ algorithm! The mix-up goes all the way back to guest(r)'s shaders for ePSXe, where the first misnamed instance can be found. Regardless of whether this was a simple accident or a misunderstanding of the differences between the algorithms, the misnamed version spread far and wide, to the extent that no true shader port of the classic Hqnx algorithm--at any scale--existed in any language. EDIT: it looks like there was an attempt back in 2005, but it never caught on because it was incomplete and had some bugs. Still, it's something.

Fast-forward to a few weeks ago, when a shader programmer named Armada brought this up in RetroArch's IRC channel. He shared some pics of the "Hq4x" shader in the common-shaders repo (which itself was based on the old XML shader in bsnes' gitorious repo, which in turn was based on guest(r)'s ePSXe shader) and an identical pic taken using a CPU-filter version of Hq4x. The differences were obvious and indisputable:
I started digging through old forum posts, and it turns out that several people had noticed this over the years, but no one ever posted any comparison pics or was able to back up their suspicions in any way.

Armada and I did find an old DOSBox renderer called OpenGL-Hq that was at least a step in the right direction, being a hardware-accelerated implementation of the Hqnx algorithm, and it was helpful insofar as it demonstrated how the algorithm can use an external lookup texture for the pattern detection. However, it wasn't directly applicable to modern shader languages, so Armada set out to port directly from the CPU filter's C implementation.

After some intense work, which included creating a program to generate the LUTs that Hqnx bases its calculations on, Armada completed his shader port (also copied into the common-shaders repo) and it works beautifully! Incidentally, the requirement for LUTs means that a true Hqnx port wasn't even possible until fairly recently, as SSNES/RetroArch is/was the first emulator (that I know of, at least) to support LUTs in shaders.
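For anyone wondering what "LUTs in shaders" means in practice: RetroArch presets can declare external textures that the shader then samples like any other input, which is how Hqnx's precomputed pattern tables get fed to the GPU. The snippet below is just an illustration with made-up names; the real Hq2x preset in common-shaders defines its own texture IDs and file names:

# excerpt from a hypothetical .cgp preset: declare a lookup texture for the shader to sample
textures = "PatternLUT"
PatternLUT = "hq2x-pattern-lut.png"
PatternLUT_linear = "false"

The idea, as I understand it, is that the shader computes a neighbor-comparison pattern for each pixel and uses it to index into that texture, rather than branching through hundreds of cases like the CPU filter does.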

Comparison Shots
If you've read my previous comparison of Hyllian's xBR vs Hqnx, you'll remember that xBR won by a landslide in pretty much every shot, which is no surprise because it wasn't really an apples-to-apples comparison. With that in mind, here are updated pics that show a true comparison between the two algorithms (2xBR shader first, Hq2x shader second and Hq2x CPU filter third):


The first thing you'll notice in those Super Mario World shots is that Hq2x does a great job of killing the jaggies. Much better, in fact, than ScaleHQ from the other comparison, and almost as good as xBR. There are a couple of rough edges (Yoshi's nose is a good example), but Hq2x is also *very* fast, so that's a reasonable tradeoff. Hq2x does completely ignore the light texture blobs in the ground, leaving them as hard-edged rectangles, while xBR turns them into ovals. You can also see some slight lingering differences between the Hq2x CPU filter and the Hq2x shader, where the shader actually does a significantly better job of handling various detections.








On these digitized shots from Earthworm Jim 1 and 2, though, the comparison sort of falls apart. xBR is able to spot the jaggies and smooth them out, while Hq2x doesn't spot any patterns at all because they fall outside of its LUT's detection matrix.

It's also worth looking at how the algorithms differ in handling text. For this comparison, I included the two extremes of xBR's corner detection, with the 'a' variant as the most rounded and the 'c' variant as the most square:
 

In this comparison, Hq2x is essentially indistinguishable from xBR's 'c' variant, insofar as the text is concerned. The xBR 'a' variant is of course substantially more bubbly, which may be desirable for some games.

Conclusion
My previous comparison wasn't really a fair fight, and I apologize to Mr. Stepin for misrepresenting his algorithm. This is a much better comparison of the algorithms, and under ideal conditions, Hq2x is almost identical to xBR in smoothing while running much faster. However, in other cases--particularly digitized artwork--limitations in Hq2x's pattern detection can leave some images completely unsmoothed.

The speed of Hq2x makes it attractive for certain use-cases, such as mobile, where performance is still of the utmost importance and xBR either doesn't hit full speed at all or else drains your battery. xBR, on the other hand, can handle a greater variety of images and is more likely to produce a pleasing image with the digitized artwork that became more common in the PSX era.

Friday, May 9, 2014

Using X-Rite i1 Display Pro with Ubuntu Trusty 14.04

As mentioned in a previous post, I borrowed an X-Rite i1 Display Pro color calibration device from my job recently and have been calibrating every display I can get my hands on :P

This proved to be a bit of an issue on my various Ubuntu-powered machines, though, as the packages for dispcalGUI that are in the Trusty repos don't support the device yet! I discovered this when the program repeatedly failed with:
new_disprd() failed with 'Instrument Access Failed'
When trying to figure out what was going wrong, I opened up a terminal and ran the 'dispcal' program directly (that is, without the GUI) with the -D5 debugging switch, which gave a little more information:
init_inst returned 'Hardware Failure' (External EEPRrom checksum doesn't match)
new_disprd failed because init_inst failed
dispcal: Error - new_disprd() failed with 'Instrument Access Failed'
Evidently, the EEPROM checksums can't be trusted and the newer version knows to ignore them, so go download the new version from the dispcalGUI website (I recommend scrolling down to the 'conventional installation' section, which has debs, instead of fooling around with the default 'zero installation' business they point you toward).
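Installing from the debs is quick. A rough sketch, assuming you grabbed the package into your downloads folder (the actual file name will include a version number):

sudo dpkg -i ~/Downloads/dispcalgui_*.deb
sudo apt-get -f install

The second command just pulls in any dependencies that dpkg complains about.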

With the new package, you should be able to use your device without any problems.

Thursday, May 8, 2014

Asus VG248QE Calibrated Color Profiles

Out of the box, the Asus VG248QE 144 Hz monitor has absolutely terrible color reproduction, but it doesn't have to be that way forever. I borrowed an X-Rite i1 Display Pro color calibration device from my job and used it to make some calibrated color profiles, which I'm happy to share with others who have this otherwise awesome monitor.

I had been using this color profile, which I found somewhere online (I don't remember where), but I made my own profiles at a 120 Hz refresh rate instead of 144 Hz: one for using a strobed backlight for anti-blur at 60 Hz, and another calibrated at half brightness (not sure if this matters) for black-frame insertion. For both of these profiles, my monitor had a built-in brightness setting of 24 and contrast of 53, just FYI. EDIT: I just set the monitor to the factory-default 'standard' profile to keep things simple.
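If you want to use one of these profiles on Linux, ArgyllCMS's dispwin (which dispcalGUI drives under the hood) can load the calibration curves into the video card LUT or install the profile as a display's default. A quick sketch; the file name here is just a placeholder for whichever profile you download:

dispwin -d 1 vg248qe-120hz.icc
dispwin -d 1 -I vg248qe-120hz.icc

The first command loads the profile's calibration into display 1 for the current session; the second installs it as that display's default.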

Also of potential interest, the dispcalGUI software suite includes a measurement of display latency, which was 15 ms for 120 Hz and 23 ms for 120 Hz with strobed backlight. I assume the increased latency was caused by the duration of the backlight blanking.

Saturday, April 26, 2014

Cx4 SNES Special Chip Image Download

Since byuu added low-level emulation support for the various special chips found in SNES games into bsnes/higan, anyone wanting to play those games has needed an image of the chips to go along with their ROMs. Most of the common SNES special chip images are available from caitsith2's SNES hardware page, except for the Cx4 image used in Mega Man X 2 and 3. Byuu is confident that the contents of the Cx4 image are non-copyrightable (it's just math tables) and hosted a copy of it on his own site for some time but that seems to be either down or missing since his various website overhauls, so I figured I'd host it here, from my mediafire account:
http://www.mediafire.com/download/9747o707ciq4lh8/cx4.rom

Tuesday, April 22, 2014

A Brief Look at x265 vs x264

The next-gen open-source video encoder known as x265 (for h.265 / HEVC) is starting to gain some traction now that it's included in the popular FFmpeg utility and its various derivatives, so I figured I'd do some poking around and see how it compares with the ubiquitous x264 (h.264 / AVC). NOTE: x265 is still extremely new and unoptimized, while x264 is mature and stable, so this isn't really a fair fight just yet.

For this comparison, I downscaled a 720p video to 480p with both encoders. For x264, I used a CRF value of 21, high profile, 'medium' quality preset with a 'film' tuning (these settings were chosen for simplicity rather than quality). For x265, I used an RF value of 21, 'medium' quality preset with 'ssim' tuning.
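For anyone wanting to reproduce something similar, the ffmpeg invocations look roughly like this. These aren't my exact command lines (the file names are placeholders and I left the audio untouched), but the options map onto the settings described above:

ffmpeg -i input-720p.mkv -vf scale=-2:480 -c:v libx264 -profile:v high -preset medium -tune film -crf 21 -c:a copy out-x264.mkv
ffmpeg -i input-720p.mkv -vf scale=-2:480 -c:v libx265 -preset medium -tune ssim -x265-params crf=21 -c:a copy out-x265.mkv

The scale=-2:480 filter resizes to 480 lines while keeping the aspect ratio (and an even width), and -x265-params crf=21 is how the RF/CRF value gets passed through to the x265 library.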

During the encoding, x264 used all 8 of my AMD Bulldozer cores at approximately 95% utilization, with an average encoding speed of ~75 fps. In comparison, x265 reached only approx. 65% CPU utilization, with an average encoding speed of just 12 fps.

As expected, the videos were comparable in quality, with a slight edge going to x264 (maybe due to differences in my settings, or maybe to differences in the encoders themselves; I'm not sure and didn't dig deep enough to find out), as you can see in these sample shots (click for full size, which is still pretty small):
x264
x265
x264
x265
However, that slightly better quality came at a cost: x264 used roughly 47% more bits per second to achieve it. That is, x265 had a final bitrate of 624 kbps vs. x264's 919 kbps.

So, x265 already produces a significantly more efficient encoding than x264 but the processing power required for that encoding is much greater (roughly one-sixth the framerate). Moreover, decoding is all done on the CPU right now, as no GPUs have built-in decoder chips for h.265, so that means greater power usage and shorter battery life when viewing.

x265 is already impressive in efficiency, but the increased computational cost of both encoding and decoding, along with the drastically slower encoding speed, makes it unattractive to me for now. I'll be keeping a close eye on development, though, since it's only a matter of time before the tradeoffs are corrected (improvements in the codec), negated (implementation of hardware encoding/decoding) or rendered moot (improvements in general computation power). After all, it wasn't so long ago that x264 was considered immature, unoptimized and poorly supported in hardware vs xvid / ASP. ;)
