Major madVR update
Last edited by cylx on 2016-11-28 12:59.

Update:
Version 0.91.2 expands the NGU quality presets to four levels: Low, Med, High, VeryHigh. The new NGU-Low is now faster than Jinc AR and super-xbr, so super-xbr for image doubling has been removed.
madVR v0.91.2
* renamed NGU quality levels: Low -> Med, Med -> High, High -> VeryHigh
* added a new even faster NGU "Low" variant
* reworked chroma/image up/downscaling/doubling settings pages
* removed NEDI and super-xbr image doubling algorithms
* small speed improvement for NGU-Med (former NGU-Low)
* small quality improvement for NGU-Med/High (former NGU-Low/Med)
* settings dialog warns when SuperRes and NGU are enabled at the same time
* pixel shader database is compressed now to save space
The new upscaling algorithm, NGU, looks roughly comparable to waifu2x in quality. Performance-wise, for 1080p24 to 4K, Medium quality costs about as much as NNEDI3-128, and at High quality my RX 480 gives up.
Some notes from the developer, madshi:
http://forum.doom9.org/showthread.php?p=1786133#post1786133
A few comments about NGU:
1) Obviously, I've decided to change the name from "NG1" to "NGU". The reason is that "NG1" didn't have any mention of "upscaling" or "super resolution" in the name. So I added the "U" for "upscaling".
2) In this initial version only 2 quality levels (high and medium) are available. A future version will add at least one more level which is faster than medium, maybe there will be even more levels, I don't know yet.
3) NGU is currently available for both doubling and chroma upscaling. The chroma upscaling algo currently does not make use of the luma channel. I plan to try adding that to a future NGU version, but that will take serious development time, so don't expect it too soon.
4) NGU doesn't need OpenCL, DirectCompute or D3D11. It runs fine with D3D9, so it should run fine in XP.
5) A word about "ideal upscaling": When trying to upscale a low-res image, it's possible to get the edges very sharp and very near to the "groundtruth" (the original high-res image the low-res image was created from). However, texture detail which is lost during downscaling cannot properly be restored. This can lead to "cartoon" type images when upscaling by large factors with full sharpness, because the edges will be very sharp, but there's no texture detail. In order to soften this problem, I've added options to "soften edges" and "add grain". Here's a little comparison to show the effect of these options:
[Comparison screenshots: low-res image | straight NGU | NGU + "Soften Edges" + "Add Grain" | Jinc AR]
I hope these extra options will make NGU acceptable for all of you, including those who preferred Jinc so far.
Now I'd like your feedback:
1) Do we still need NNEDI3 image doubling? Or can NGU completely replace it?
2) Do we still need SuperRes (image doubling)? Or can NGU completely replace it?
3) Do we still need NEDI image doubling?
I'll not ask about super-xbr image doubling yet because I've not scaled NGU down to that performance level yet.
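As a toy illustration of the "add grain" idea described above (purely my own sketch; madVR's actual implementation is not public), grain can be thought of as low-amplitude pseudo-random noise added to the upscaled image, masking the over-smooth "cartoon" look that remains once edges are sharp but texture detail is gone:

```python
import random

def add_grain(pixels, strength=2.0, seed=42):
    """Add uniform pseudo-random grain to a list of 8-bit luma values.

    `strength` is the maximum deviation in code values; results are
    clamped back into the 0..255 range afterwards.
    """
    rng = random.Random(seed)
    return [max(0, min(255, round(p + rng.uniform(-strength, strength))))
            for p in pixels]

# A flat (over-smooth) region gains subtle variation:
flat = [128] * 8
print(add_grain(flat))
```

A fixed seed makes the example deterministic; a real renderer would vary the noise per frame so the grain shimmers like film grain rather than sticking to the image.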
Last edited by BallanceHZ on 2016-11-19 17:18.
With a 1080p screen and a GTX 1070, using NGU High for both chroma upscaling and luma doubling still holds up on 720p content (VCB's 全金 release), but additionally enabling chroma doubling at NGU High brings the card to its knees. So should I just not update? Also, like many others I ran into the problem of Ctrl+J and NGU failing to start...

After extensive testing (well, about an hour), my conclusion: with NGU enabled, there is no need for image doubling.
Since I happened to be watching Log Horizon, I used that show for testing.
The comparison below is purely by eye; judging by feel is my creed.
I am upscaling the 1080p source to 4K, for no reason other than that my screen is 4K.
Before: NNEDI3 256 + Jinc AR + NNEDI3 64 luma 2x doubling + NNEDI3 64 chroma doubling (~45 fps)
Middle: NGU High + Jinc AR + NGU Med luma doubling + NGU Med chroma doubling (~25 fps)
After: NGU High + Jinc AR + NNEDI3 64 luma 2x doubling + NNEDI3 32 chroma doubling (~45 fps)
(I later also enabled soften edges = 1 and add grain = 1.)
First, NGU feels like it is neural-network based. Why? Because its output looks a lot like waifu2x's.
Comparing the results, NGU is noticeably clearer, but feels a bit soft, though not as soft as waifu2x. Actual impressions will vary from person to person.
As for compute cost: on a 1080p screen, as long as nothing in the chain degrades the output you are fine; anything beyond that is voodoo. Don't bother supersampling and then scaling back down; you would be better off just buying a bigger screen.
On large screens, 2K already starts to tax the hardware, and 4K is brutal.
My advice: with NGU enabled you can turn image doubling off, mainly to keep the frame rate up. If your frame rate exceeds 60 fps, you can consider enabling a lower-tier image doubling or SuperRes.
On madVR frame rates: press Ctrl+J to see the rendering times. Under 55 ms is acceptable; under 30 ms is smooth. As for the relationship between rendering time and frame rate, on my setup: 50 ms => 30 fps, 30 ms => 45 fps.
One last note: exclusive fullscreen is a good thing; it can output 10-bit.
For 720p 24 fps video, cards below a GTX 1060 need not apply.
For 1080p 24 fps video, cards below a GTX 1070 need not apply.
This algorithm is rather extreme and picky about sources: the better the source, the better the result; the worse the source, the more aggressively its flaws get amplified.
If you can handle the tuning, that's another matter; if you can't, just stick with NNEDI3 for luma image doubling.

"First, NGU feels like it is neural-network based. Why? Because its output looks a lot like waifu2x's." The conclusion may well be correct, but... being a neural network has no direct theoretical connection to that visual result, does it?
"Under 55 ms is acceptable; under 30 ms is smooth. As for the relationship between rendering time and frame rate, on my setup: 50 ms => 30 fps, 30 ms => 45 fps." That can't be right... 1000/24 = 41.67 ms, meaning a 24 fps source must render in under 41.67 ms per frame to avoid dropped frames; likewise 1000/30 = 33.33 ms and 1000/45 ≈ 22.22 ms.

This algorithm really is neural-network based. A few days ago Google also announced RAISR, another new algorithm likewise based on a neural approach, though Google's is aimed at real-time processing and hasn't been released yet...

牛肉蛋花粥 posted on 2016-11-19 18:56:
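The corrected arithmetic above is just the per-frame time budget, 1000 / fps in milliseconds. A few lines confirm the numbers:

```python
def frame_budget_ms(fps: float) -> float:
    """Maximum average render time per frame (in ms) before frames drop."""
    return 1000.0 / fps

for fps in (24, 25, 30, 45, 60):
    # e.g. 24 fps -> 41.67 ms, 30 fps -> 33.33 ms, 45 fps -> 22.22 ms
    print(f"{fps:>2} fps -> {frame_budget_ms(fps):6.2f} ms per frame")
```

So the rule of thumb "under 55 ms is acceptable" only holds if the display chain can absorb some dropped frames; to present every frame of a 24 fps source, the render time must stay below 41.67 ms.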
However, like many others I ran into the problem of Ctrl+J and NGU failing to start...
I updated to 0.91.1 today; that bug should be fixed now.

wby238 posted on 2016-11-20 11:19:
This algorithm really is neural-network based. A few days ago Google announced RAISR, another algorithm likewise based on a neural ...
madshi has said NGU has nothing to do with NNEDI. And if it were a waifu2x-style neural-network algorithm, shouldn't there be a models file of considerable size?
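To put a rough number on that "models file" argument: the layer shapes below are loosely based on waifu2x's published 7-layer vgg_7 network (my assumption for illustration; nothing is known about NGU's internals), and even such a small CNN's float32 weight file would be on the order of a megabyte:

```python
# (kernel_size, in_channels, out_channels) for a waifu2x-style vgg_7 net
# NOTE: hypothetical layer list for illustration, not NGU's actual design.
layers = [(3, 1, 32), (3, 32, 32), (3, 32, 64), (3, 64, 64),
          (3, 64, 128), (3, 128, 128), (3, 128, 1)]

# Each conv has k*k*cin*cout weights plus one bias per output channel.
params = sum(k * k * cin * cout + cout for (k, cin, cout) in layers)
size_mb = params * 4 / 1024 / 1024  # float32 = 4 bytes per parameter
print(f"{params:,} parameters -> {size_mb:.2f} MiB")
# → 287,585 parameters -> 1.10 MiB
```

A weight file of that size would be hard to miss in a plugin distribution, which is what the reply is getting at.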