When using the multi-viewport / docking feature of dear imgui (available on the main repo in the docking branch), dear imgui will use desktop-relative coordinates instead of window-relative ones. This causes the rendering to get offset if not handled correctly.
Before this change the rendering matrix wasn't used at all; this has now been changed in the vertex shader.
Note that this change is fully backwards compatible with window-relative coordinates as well: in that case the upper-left corner will always be set to 0,0, so this change works with both versions.
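For illustration, a minimal sketch of how the projection can be built from the draw data so both cases work (the function name and the exact bx/bgfx calls here are my assumptions, not a quote of the actual patch):

```cpp
#include <bgfx/bgfx.h>
#include <bx/math.h>
#include <imgui.h>

// Build an ortho projection covering the draw data's display rect. With the
// docking branch, DisplayPos is the desktop-relative origin of the viewport;
// in the classic single-window case it is 0,0, so the same code handles both.
static void setImGuiViewTransform(bgfx::ViewId viewId, const ImDrawData* drawData)
{
    const float x = drawData->DisplayPos.x;
    const float y = drawData->DisplayPos.y;
    const float w = drawData->DisplaySize.x;
    const float h = drawData->DisplaySize.y;

    float ortho[16];
    bx::mtxOrtho(ortho, x, x + w, y + h, y, 0.0f, 1000.0f, 0.0f
        , bgfx::getCaps()->homogeneousDepth);
    bgfx::setViewTransform(viewId, NULL, ortho);
}
```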
I also added some changes to skip rendering when it's not needed (based on the other backend implementations for dear imgui, such as the OpenGL one); a sketch of the check follows.
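A minimal sketch of the kind of early-out the OpenGL backend performs (the function name is assumed):

```cpp
#include <imgui.h>

// Mirror the early-out used by the OpenGL backend: skip rendering entirely
// when the framebuffer is zero-sized (e.g. the window is minimized).
static bool shouldSkipRender(const ImDrawData* drawData)
{
    const int fbWidth  = (int)(drawData->DisplaySize.x * drawData->FramebufferScale.x);
    const int fbHeight = (int)(drawData->DisplaySize.y * drawData->FramebufferScale.y);
    return fbWidth <= 0 || fbHeight <= 0;
}
```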
* display bokeh sample pattern, add bokeh shape, improve look
draw sample pattern to texture and display in ui to see number of samples and their arrangement
add bokeh shape controls
remove ad hoc 'sqrt' pattern since the display makes the existing pattern easier to understand, and it looks nicer.
switch to a floating point color texture and leave lighting results in linear space until after dof is performed. this provides better results, and bright spots can produce more noticeable bokeh shapes (see the texture setup sketch below).
change default values to take more samples at reduced resolution so the initial experience when loading the sample is a better looking image.
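A minimal sketch of the floating point color target setup mentioned above (RGBA16F and the flags are assumptions; the sample may use a different format):

```cpp
#include <bgfx/bgfx.h>

// Allocate a floating point render target so lighting output can stay in
// linear HDR space until after the depth-of-field passes. RGBA16F and the
// flags are assumptions; the sample may use a different format.
static bgfx::TextureHandle createHdrColorTarget(uint16_t width, uint16_t height)
{
    return bgfx::createTexture2D(
          width
        , height
        , false // no mips
        , 1     // single layer
        , bgfx::TextureFormat::RGBA16F
        , BGFX_TEXTURE_RT
        );
}
```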
* update screenshot, minor change to ui
fix height of ui element so a scrollbar is not required by the default layout
update screenshot
* fix typo in texturev
at least, i'm pretty sure that's a typo; i don't see a reason to set width twice
- Fix Android build
- "entry_android.cpp:157:29: error: no member named 'kErrorRederWriterEof' in namespace 'bx'; did you mean 'kErrorReaderWriterEof'?"
* Implement bokeh depth of field
Implement bokeh depth of field as described in the blog post here:
https://blog.tuxedolabs.com/2018/05/04/bokeh-depth-of-field-in-single-pass.html
Additionally, implement the optimizations discussed in the closing paragraph. Apply the effect in multiple passes: calculate the circle of confusion and store it in the alpha channel while downsampling the image; then compute depth of field at this lower resolution, storing sample size in alpha; then composite the blurred image based on the sample size. Compositing at the lower resolution like this can lead to blocky edges where there's a depth discontinuity and just enough blur. This may be an area to improve on.
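For reference, the circle-of-confusion helper from the blog post boils down to something like this (transcribed as plain C++ for illustration; kMaxBlurSize is an assumed tuning constant):

```cpp
#include <math.h>

// Circle-of-confusion helper in the spirit of the linked blog post: the sign
// of the CoC separates near and far field, and its magnitude scales the blur
// radius.
static const float kMaxBlurSize = 20.0f;

float getBlurSize(float depth, float focusPoint, float focusScale)
{
    float coc = (1.0f / focusPoint - 1.0f / depth) * focusScale;
    coc = fmaxf(-1.0f, fminf(1.0f, coc)); // clamp to [-1, 1]
    return fabsf(coc) * kMaxBlurSize;
}
```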
Provide an alternate means of determining the radius of the current sample when blurring. I find the blog post's sample pattern difficult to reason about directly: it is not obvious, given the parameters, how many samples will be taken, and it can be very many samples, though the results are good. The 'sqrt' pattern chosen here looks alright and allows the number of samples to be set directly. If you are going to use this in a project, it may be worth exploring additional sample patterns, and certainly update the shader to remove the pattern choice from inside the sample loop. A sketch of one such pattern follows.
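As an example of a pattern whose sample count is chosen directly, here is a hypothetical sqrt-style golden-angle spiral (my illustration; the commit's actual 'sqrt' pattern may differ):

```cpp
#include <math.h>

// Hypothetical sketch of a 'sqrt'-style disk pattern: the sample count is
// chosen directly, and the radius grows with sqrt(i) so samples spread with
// uniform density over the disk (a golden-angle spiral here).
static void diskSample(int i, int count, float maxRadius, float* outX, float* outY)
{
    const float kGoldenAngle = 2.399963f; // ~137.5 degrees in radians
    const float r = maxRadius * sqrtf((float)i + 0.5f) / sqrtf((float)count);
    const float theta = (float)i * kGoldenAngle;
    *outX = r * cosf(theta);
    *outY = r * sinf(theta);
}
```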
* fix typo in shader of denoise example
copy/paste error: the y offset was being applied to the x component instead of the y component; a hypothetical illustration follows.
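Illustration of the bug pattern (hypothetical code, not the actual shader source):

```cpp
// When building the vertically offset sample coordinate, the y offset was
// added to the x component.
struct Vec2 { float x, y; };

// bug: y component built from uv.x
Vec2 offsetBuggy(Vec2 uv, float dx, float dy) { return { uv.x + dx, uv.x + dy }; }

// fix: y component built from uv.y
Vec2 offsetFixed(Vec2 uv, float dx, float dy) { return { uv.x + dx, uv.y + dy }; }
```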