This sounds *extremely* CPU intensive. My best guess at an algorithm for this would be:
- Locate a texel that is a direct shot from the light source and trace a ray to it.
- Adjust the ray's color and intensity, as well as its angle, accounting for refraction, etc.
- Determine the diffusion cone and find line-of-sight texels in that cone, casting rays to each, probably distributing the power over a curve — that is, texels on the outside of the cone receive less power than those on the inside.
- Repeat from that point as if each of the diffused rays were a light source itself until the energy of the light falls below a darkness threshold.
- Do this mad hopping around for all light sources against all in-view texels again and again until all energy has been used up for each ray.
- Record any color strikes on the camera "retina", mix with previous strikes.
- Display the camera's image.
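The "hop until dark" part of the loop above can be sketched roughly as below. Everything here is a guess at one possible shape — the names, the cosine-power falloff over the cone, and the per-bounce energy fraction are my own placeholders, not any renderer's actual API:

```python
import math

DARKNESS_THRESHOLD = 0.01   # stop tracing once a ray carries less energy than this
DIFFUSE_FALLOFF = 2.0       # hypothetical cosine-power curve over the diffusion cone

def cone_weight(angle, half_angle):
    """Distribute power over the diffusion cone: texels near the cone's
    axis receive more energy than texels at its edge."""
    if abs(angle) > half_angle:
        return 0.0
    return math.cos(angle / half_angle * math.pi / 2) ** DIFFUSE_FALLOFF

def trace(energy, bounce_fraction=0.5, depth=0):
    """Recursively 'hop': each diffuse bounce acts as a new light source
    with reduced energy, until the darkness threshold is reached.
    Returns how many bounces the ray survived."""
    if energy < DARKNESS_THRESHOLD:
        return depth
    return trace(energy * bounce_fraction, bounce_fraction, depth + 1)
```

With a 0.5 energy fraction and a 0.01 threshold, a ray starting at full power dies after 7 bounces — which hints at why doing this for every texel and every light source gets so expensive.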
A cool effect: instead of recording the strikes against the camera, record all the properties of each ray entering the lens. Then, by adjusting the focal point of the "retina", you could quickly re-render the scene at a different focus. That would look really cool, I think.
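That refocusing trick can be sketched in one dimension: keep each ray as (position on the lens, slope, energy) instead of collapsing it into a pixel, then re-project onto a retina placed at any focal distance after the fact. This is a toy sketch under my own assumptions (1-D retina, linear ray propagation, made-up names):

```python
def refocus(rays, focal_distance, num_pixels=8, retina_width=2.0):
    """Re-bin stored lens rays onto a 1-D retina sitting focal_distance
    behind the lens. rays is a list of (lens_x, slope, energy) tuples."""
    image = [0.0] * num_pixels
    for lens_x, slope, energy in rays:
        hit = lens_x + slope * focal_distance             # where this ray lands
        pixel = int((hit / retina_width + 0.5) * num_pixels)
        if 0 <= pixel < num_pixels:
            image[pixel] += energy                        # mix with previous strikes
    return image
```

Two rays that converge at focal distance 1.0 pile up in a single pixel there, but spread across separate pixels at distance 0.0 — the re-render is just a re-binning pass, no retracing needed.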
Still, it seems like it could take years to render an image... unless you did a net-distributed thing.