
Making my own microfacet BRDF

Started by September 04, 2024 12:06 PM
30 comments, last by MagnusWootton 1 day, 7 hours ago

I added a sky light to it, and it's yellow facing towards the light, but away from the light it's all blue from the sky light.

Towards the light it's yellow, facing away from the light it's blue!?

I read what you wrote to me. I'm actually computing this differently (perhaps in a worse way): I'm just starting from the eye, hitting the land, computing Fresnel, splitting into two rays (one reflection, one refraction), then I just go into the terrain, then out of the terrain, in and out repeatedly, reflecting and refracting, and all the rays end up not hitting the light, only the sky.

That's probably the wrong way to do it. I'll have a think about what you've written to me and hopefully pull out something related to it, because maybe I've explored this one enough.

It seems I mostly get forward scatter and back scatter, and I get very little side scatter.

The thing that is really bugging me, though, is that without the skylight I get very little light on the normals facing the light. That has to happen: the light is splitting up into all directions, so it's guaranteed to be weaker. But that means I can't even light it up properly, because it's all at about 25% of the light it should be, since the energy has gone in all directions, not just straight back to the camera/light!

So I have a major problem there.

I did it with the skylight turned off… and actually… it's not as bad as I thought.

I don't know, it could have been a bug, but the diffuse light looks about right after all. WTH…


MagnusWootton said:
I'm actually just starting from the eye, hitting the land, computing Fresnel, splitting into two rays (one reflection, one refraction), then I just go into the terrain, then out of the terrain, in and out repeatedly, reflecting and refracting, and all the rays end up not hitting the light, only the sky.

I can't picture it well enough. It raises some questions:

‘splitting to two rays’:
How do you calculate the reflection ray directions if you hit the terrain? (If it's just reflection about the normal, that's a perfectly smooth material with only sharp reflections, but no microfacets at all; microfacets are what make a surface rough, scattering the rays into multiple directions.)
Why do you use a refraction ray? It looks like a solid opaque material, so there are no refractions.
You probably want some subsurface scattering, transparency or caustics, I guess, but it's easier to start with opaque materials.
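For reference, the "reflection about the normal" case mentioned above can be sketched like this (a minimal C++ sketch; the Vec3 type and function names here are placeholders, not anyone's actual code):

```cpp
#include <cassert>

struct Vec3 { float x, y, z; };

static float dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Mirror reflection of an incoming direction d about the unit normal n:
// r = d - 2 * dot(d, n) * n.
// This is the perfectly smooth case: one sharp reflection direction,
// no microfacet scattering at all.
static Vec3 Reflect(const Vec3& d, const Vec3& n) {
    float k = 2.0f * dot(d, n);
    return { d.x - k * n.x, d.y - k * n.y, d.z - k * n.z };
}
```

A rough (microfacet) surface would instead perturb the normal per ray before this step, spreading the reflections out.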

‘in and out the terrain’:
Sounds like you use paths?
Here's an illustration of how this basically works:

I have numbered the path segments in order.


So we begin with the first ray (1) from the eye through the current pixel. (In games this ‘primary ray’ usually is not traced; instead we rasterize the scene and calculate the first hit point from depth. But if we do trace it, we can choose a random subpixel position to achieve antialiasing by accumulating multiple samples, where one sample means an entire path.)

We hit the wall. From there we trace one shadow ray to a random light, and if it's visible we shade the wall material with that light and add the contribution.

Then we calculate a reflection direction. The wall has a diffuse material, so our potential directions cover the whole hemisphere above the surface normal, but with a cosine-weighted distribution. That's the blue dots. It's easy to calculate them from a given random seed. Our seed gave us the direction of path segment 2.

So we trace that ray and see where we hit something. It's the floor. Again we trace a shadow ray to a light, but it's occluded, so there is nothing to add.
If the light were visible, we would again shade the floor material with the light.
But then we need to propagate this reflected light back to hit point 1, which is the one we see from the eye at the current pixel. So we want to shade hit point 1 again with the light coming from hit point 2.
Technically this is elegantly handled with recursion, so it's not as complicated as it sounds.

Then the whole process recurses once more: make a random direction fitting the hit material's BRDF, trace segment 3, do the shadow ray, propagate back to the first hit point. And so on.

If we decide to stop after 3 segments, we have captured direct lighting and 2 bounces of indirect lighting.
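The steps above can be condensed into one small recursion. This is only a structural sketch of the idea, not anyone's actual renderer: the "scene" is faked so that every ray hits a surface of constant albedo and every shadow ray sees a light of strength 1 (ShadeWithLight, SampleBrdfDirection and kAlbedo are made-up stand-ins).

```cpp
#include <cassert>

// Hypothetical stand-ins for a real scene: every ray "hits" a surface
// with constant albedo, and every shadow ray reaches a light of
// strength 1. These stubs exist only so the control flow compiles.
static const float kAlbedo = 0.5f;

static float ShadeWithLight() { return 1.0f; }       // shadow ray visible -> direct light
static float SampleBrdfDirection() { return 0.0f; }  // would pick a cosine-weighted direction

// One path: at each hit, add direct light from the shadow ray, then
// recurse along one randomly chosen reflection direction and attenuate
// the bounced light by the albedo. This mirrors segments 1, 2, 3 above.
static float TracePath(int depth, int maxDepth) {
    if (depth >= maxDepth) return 0.0f;
    float direct = ShadeWithLight();                  // shadow ray to a light
    SampleBrdfDirection();                            // choose the next segment
    float indirect = TracePath(depth + 1, maxDepth);  // trace it, propagate back
    return direct + kAlbedo * indirect;               // hit point receives bounced light
}
```

With maxDepth = 3 this captures direct lighting plus 2 bounces, exactly as described.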
So the cool thing about path tracing is that the same simple algorithm gives us everything: soft shadows, glossy or sharp reflections, and even indirect bounces.
But of course we need many samples to get a good estimate and to reduce the noise coming from picking random directions. For a noise-free image of the Cornell Box I indeed need 4000 paths per pixel (but I don't do shadow rays, so my paths only contribute if they randomly hit a light directly). It looks like this:

You can imagine that if we repeat this process 4000 times per pixel, all our randomly chosen directions should cover the entire hemisphere, and only then do we get a result coming close to reality. (Though complex BRDFs need more samples than simple ones; caustics are very hard to capture, for example.)
It's Monte Carlo integration, and this concept of utilizing randomness is likely what feels new to us when we move from rasterization to ray tracing.
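As a concrete toy example of that Monte Carlo idea (my own illustration, not from the thread): the integral of cos(theta) over the hemisphere is exactly pi, and averaging many uniform random samples, each weighted by 1/pdf, converges to it.

```cpp
#include <cassert>
#include <cmath>
#include <cstdlib>

static const double kPi = 3.14159265358979323846;

// Monte Carlo estimate of the hemisphere integral of cos(theta).
// Uniform hemisphere sampling has pdf = 1 / (2*pi), so each sample
// contributes cos(theta) / pdf = cos(theta) * 2*pi. Exact answer: pi.
static double EstimateCosineIntegral(int n, unsigned seed) {
    srand(seed);
    double sum = 0.0;
    for (int i = 0; i < n; ++i) {
        // For uniform hemisphere sampling, cos(theta) is itself
        // uniformly distributed in [0, 1].
        double cosTheta = rand() / (double)RAND_MAX;
        sum += cosTheta * 2.0 * kPi;
    }
    return sum / n; // noisy for small n, converges to pi for large n
}
```

The noise in a path-traced image is this same estimator variance, one estimate per pixel.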

You may not feel very interested in path tracing because it's so expensive or feels too advanced. But it's the foundation of ray tracing and worth knowing in any case.

Btw, the blue sky is… just blue? So now you have solid blue instead of solid black. That's not much better. ; )
You should use real-world environment maps, like here:

This gives much better feedback on how reflections look, and it's cheap.
Your mesh also is not really ideal, since almost all normals point just upwards. Putting some spheres and boxes on top would help as well.

That ray diagram is similar to mine, except mine actually goes into the surface and back out again. I did that because my favorite Cook-Torrance shader handled transmission as well, and it gives you a more organic look.

I only handle the transmitted rays that enter and then leave. If a ray enters and stays in (which happens when the surface is smooth), I just abort it and normalize it out of the sum, to handle it in an approximate way.

So I chopped off all the fractal harmonics except for one, and this is my result for a smooth surface.

It's actually not fully smooth; it has little integer steps in it as it goes up and down, and I don't know if this is causing extra transmission or not. But the strange thing is, it seems to be lighting up backwards on the reverse normal, not the one facing the light, and this could be a bug.

Yes, it was a bug. Here the little facets are acting like mirrors now, properly, and the whole system is better.

The sun is unrealistically sized (it's about 90 degrees), but I've got a cool realistic scattering effect from the sun in the diffusion direction! It's not totally amazing, but it's pretty cool. If you do the diffuse the hard way, use a slightly bigger sun to handle the lack of rays; you get a cool fire & ice mix on the ground reflection!


So here it is without the roughness on the fractal.

And here with noise added to the surface. (These are nearly identical material factors.)

JoeJ said:

‘splitting to two rays’:
How do you calculate the reflection ray directions if you hit the terrain? (If it's just reflection about the normal, that's a perfectly smooth material with only sharp reflections, but no microfacets at all; microfacets are what make a surface rough, scattering the rays into multiple directions.)
Why do you use a refraction ray? It looks like a solid opaque material, so there are no refractions.
You probably want some subsurface scattering, transparency or caustics, I guess, but it's easier to start with opaque materials.

‘in and out the terrain’:
Sounds like you use paths?

So when I enter the terrain, or leave the terrain, I call Fresnel to get the split between refraction and reflection. Then I call the "Snell function" and the reflect function, and I get the two rays.

Then I just keep doing that from the ray's current viewpoint, so when you're looking at the surface, what you're actually seeing is splitting off into lots of different directions, exponentially as a power of 2 (because you split into refraction and reflection each time).
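A sketch of what such a Fresnel / Snell split could look like (my guess at the shape of it, using Schlick's approximation for the Fresnel term and a GLSL-style refract; the names and f0 value are illustrative, not the poster's actual functions):

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

static float dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Schlick's approximation to Fresnel reflectance: the fraction of light
// that reflects at the interface; the remaining 1 - F refracts.
// f0 is the reflectance at normal incidence (e.g. ~0.04 for glass-like).
static float FresnelSchlick(float cosTheta, float f0) {
    float m = 1.0f - cosTheta;
    return f0 + (1.0f - f0) * m * m * m * m * m;
}

// Snell refraction, GLSL-style. i is the (unit) incident direction,
// n the (unit) surface normal, eta the ratio of refractive indices.
// Returns false on total internal reflection (no refracted ray).
static bool Refract(const Vec3& i, const Vec3& n, float eta, Vec3& out) {
    float c = dot(n, i);
    float k = 1.0f - eta * eta * (1.0f - c * c);
    if (k < 0.0f) return false; // total internal reflection
    float t = eta * c + sqrtf(k);
    out = { eta * i.x - t * n.x, eta * i.y - t * n.y, eta * i.z - t * n.z };
    return true;
}
```

Each hit would then spawn a reflection ray weighted by F and a refraction ray weighted by 1 - F, which is where the power-of-2 ray count comes from.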

It's sort of like quantum tunnelling (photon tunnelling style), in that you go through the little micro hills, hit the other side, then bounce onto another hillside next to it, and then reflect off to the light. But you just handle it with one general repeating kernel; all hits are managed identically.

I can only handle a low maximum number of rays, and I'm finding I can't handle a small reflection light, and I don't get as much perpendicular scatter as back-and-forth scatter. (You get the most forward scatter, slightly less back scatter, and 90-degree scatter is non-existent, very hard to get!)

I've added subsurface in the form of 3 identical layers of the fractal sitting between air layers, and the rays tunnel in and out of the layers, but I haven't had much success with it yet. Maybe it does look a little more voluminous with it, but bugs have stopped me exploring that one much. Still getting bugs out!

It's definitely easier just handling the ray enter-and-leave cases; the rays that enter and never leave I just normalize out of the result. When a ray enters and doesn't leave, it's usually because the surface was smooth and didn't have enough pores; that's when it happens more. But because my fractal is matte and noisy on the surface, it only happens on a low percentage of the view.

I've never done this before myself, and I wasn't sure it was going to work; halfway through, when my light went completely dead, I was starting to doubt it! But I've got it working now. Add roughness to the fractal (it works fine with micro detail; ~100 micrometre bumps are all you need, it doesn't have to go to full nano detail) and it transmutes from a metallic surface to a matte surface, just from hitting so many different normals at once during the scatter.

It's pretty amazing!

JoeJ said:

You can imagine that if we repeat this process 4000 times per pixel, all our randomly chosen directions should cover the entire hemisphere, and only then do we get a result coming close to reality. (Though complex BRDFs need more samples than simple ones; caustics are very hard to capture, for example.)
It's Monte Carlo integration, and this concept of utilizing randomness is likely what feels new to us when we move from rasterization to ray tracing.

You may not feel very interested in path tracing because it's so expensive or feels too advanced. But it's the foundation of ray tracing and worth knowing in any case.

Btw, the blue sky is… just blue? So now you have solid blue instead of solid black. That's not much better. ; )
You should use real-world environment maps, like here:

This gives much better feedback on how reflections look, and it's cheap.
Your mesh also is not really ideal, since almost all normals point just upwards. Putting some spheres and boxes on top would help as well.

If you wanted to combine it with path tracing, it actually fits like a glove. It's more computational, but if you keep firing random rays you'll converge to the same result, just with a bit of extra mm-scale surface tracing going on with the main bounces doing the photon tunnelling.

It's not going to be the cheapest, but I think this may become a little faster later on; I just need to work on it more. And the cool thing about it is that the roughness geometry is actually fully virtualized: there are no dot product computations (except comparing the ray angle to the sun angle), no GGX equation; it's just there on the fractal, and that made it possible for me to do it at all. And if you add some micro noise harmonics, it goes from metal to snow!! 🙂 Reflective to matte, first time I ever saw it myself, quite unbelievable. Hit enough normals? You get matte.

DISCLAIMER→

When you hit a matte surface, you can just randomize the reflection vector and call it done.
Doing this photon tunnelling is basically doing the same thing! But it's slightly different.
But is it any better?? Or even realistic at all?

Anything that hits something and then comes out at a random normal will do it.
So you can see it's that easy to go from reflective to matte.
But how the normal is randomized is what the material LOOKS LIKE! So a photon-tunnelled material is a different way to approximate it!

You can be completely unrealistic, but if that normal randomizes, then you get matte! It doesn't matter how you randomize it!

So it's not just getting the matte that's the big deal, it's how you got the matte! (Whether it's PBR or not.)

So just treating it like glass could be completely WRONG! But it is another way to randomize the normal!
And that's all you need to do to get matte.

So mine is just another way to randomize the normal! So it's probably quite fake!

It is nice though; if you self-shadow grass it looks pretty good, and this is like 100 little grass shadows all at once. So is that the real thing? Or is it not quite it yet?

But it could be more realistic than just randomizing the normal! So it's not that bad.

But I doubt it's the finished thing yet. There's more to take into account.

MagnusWootton said:
You can be completely unrealistic, but if that normal randomizes, then you get matte! It doesn't matter how you randomize it!

It does matter a lot in general; it's just less noticeable on matte / diffuse / rough materials.

Here is a snippet to generate a random vector on the hemisphere, with uniform distribution:

static vec RandomDir()
{
    float r0 = rand();
    float r1 = rand();
    float sinTheta = sqrt(1 - r0*r0);
    float phi = float(PI*2) * r1;
    float x = sinTheta * cos(phi);
    float y = sinTheta * sin(phi);
    return vec(x, y, r0);
}

If you use this without additional weighting to compensate for the distribution error, your result will look somewhat similar to real diffuse materials, but it will be wrong.

Here is the snippet with a cosine weighted distribution:

static vec RandomDirCosineWeighted()
{
    float r0 = rand();
    float r1 = rand();
    float r = sqrt(r1);
    float a = r0 * float(PI*2);
    vec d(cos(a) * r, sin(a) * r, 0);
    d[2] = sqrt(1 - r*r);
    return d;
}

This gives correct / realistic results.
With something like the Cornell Box scene, the difference is very noticeable to the naked eye.

(rand() gives a random number between 0 and 1. Z should align with the normal, so you need to rotate those results from z to the normal.)
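That z-to-normal rotation can be done by building an orthonormal basis around the normal. Here is a sketch of one common way (the branchless construction by Duff et al.; the Vec3 type is a placeholder, not from the snippets above):

```cpp
#include <cassert>

struct Vec3 { float x, y, z; };

// Build an orthonormal basis around the unit normal n (branchless
// method of Duff et al. 2017) and express the z-aligned sample s in
// that basis, so that s = (0, 0, 1) maps exactly onto n.
static Vec3 AlignToNormal(const Vec3& s, const Vec3& n) {
    float sign = (n.z >= 0.0f) ? 1.0f : -1.0f;
    float a = -1.0f / (sign + n.z);
    float b = n.x * n.y * a;
    // tangent and bitangent, both orthogonal to n and to each other
    Vec3 t  = { 1.0f + sign * n.x * n.x * a, sign * b, -sign * n.x };
    Vec3 bt = { b, sign + n.y * n.y * a, -n.y };
    return { s.x * t.x + s.y * bt.x + s.z * n.x,
             s.x * t.y + s.y * bt.y + s.z * n.y,
             s.x * t.z + s.y * bt.z + s.z * n.z };
}
```

Feed the result of RandomDirCosineWeighted through this with the surface normal to get the world-space sample direction.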

MagnusWootton said:
doing this photon tunnelling - is basicly doing the same thing! but its slightly different. but is it any better?? or even realistic at all.

You mean the refraction rays that go inside the material?

Technically this actually becomes a volumetric lighting problem, similar to clouds or fog rendering.
It requires a definition of the material inside the object, and this material may change, e.g. from dense to sparse fog. If so, we would need to sample multiple steps while we travel through the matter (ray marching), and we might also want to randomize ray direction at each step to capture scattering effects.

But often we just assume a constant material, e.g. skin or simple fog in Quake, which makes things easier and less expensive.
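For that constant-material case, the transmittance along a straight segment has a closed form (Beer-Lambert), so no ray marching is needed. A minimal sketch:

```cpp
#include <cassert>
#include <cmath>

// Beer-Lambert transmittance through a homogeneous medium:
// T = exp(-sigma * distance), where sigma is the extinction
// coefficient. For constant media this single expression replaces
// step-by-step ray marching through the volume.
static float Transmittance(float sigma, float distance) {
    return expf(-sigma * distance);
}
```

Non-constant media (dense-to-sparse fog) are where the multiple sampled steps mentioned above become necessary.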

MagnusWootton said:
so is that the real thing? or is it not quite it yet.

Yeah, the problem of uncertainty.

That's why I used the Cornell Box scene. The materials are specified and simple Lambert diffuse, so everybody can compare their render to reference images. I made reference images with programs like Blender, which isn't so easy because we need to turn off all ambient lighting and tone mapping / gamma stuff to compare linear color space results. But it worked, and when my image matched the reference I was sure my stuff was right.

For more complex materials it quickly becomes impossible to have a reference.

However, you can also ignore correctness and just try to get something that looks good to you.
