- From Git Bash: `> git clone http://prod3.imt.hig.no/overkill-studios/imt2531-assignment2.git`
4. In Visual Studio: go to `File->Open->CMake...`, and select the CMakeLists.txt file located in `./yourdesiredworkdir/imt2531-assignment2/`
##### Additional/Optional setup
- Running the program with different scenes:
To start the program from a different scene, all you need to do is specify another configuration in the file named `launch.vs.json`. This file can be found in the hidden directory `.vs`, or by going to `CMake->Debug and Launch Settings`.
Setting the args of a configuration to reflect a scene will make that scene open upon building & debugging with said configuration.
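As a sketch, a configuration in `launch.vs.json` that passes a scene file as an argument could look like the following (the target name and scene path here are illustrative assumptions, not taken from the project):

```json
{
  "version": "0.2.1",
  "configurations": [
    {
      "type": "default",
      "project": "CMakeLists.txt",
      "projectTarget": "assignment2.exe",
      "name": "assignment2 (myscene)",
      "args": [ "scenes/myscene.yml" ]
    }
  ]
}
```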
## MacOS
## Linux
Since we are using C++17 features, a newer compiler is necessary.
We are also using Python 3 or newer.
```bash
# Install Clang 5 (allowing C++17 features to be compiled)
sudo apt-get install clang-5.0
# Install python3 (NOTE: a lot of Ubuntu distros already come with this)
sudo apt-get install python3.5
# Remove previous installations of cmake
sudo apt-get remove --purge cmake
mkdir ~/temp # make a temp folder for the cmake binaries
We want to model an entire scene graph.
- [x] Update render of loaded Entity on file change <br>
# Full feature list
### Rendering of 3D models.
3D models are rendered in a GLFW window.
### Loading of 3D models from custom simplified format.
The custom file format is YAML-based. It consists of a vertex count, then a list of vertices, where each vertex defines a point in space, its normal vector, its UV coordinates, and lastly its color. <br>
Then comes a mesh count, describing how many meshes are to follow.<br>
Each mesh has its name, material, shader, and a count of how many triangles it consists of. Each triangle lists three vertex indices to define it.
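A tiny hand-written example of what such a model file might look like (the exact key names are illustrative, not the project's actual keys):

```yaml
vertices: 3
v: 0.0 0.0 0.0   0.0 0.0 1.0   0.0 0.0   255 0 0 255
v: 1.0 0.0 0.0   0.0 0.0 1.0   1.0 0.0   0 255 0 255
v: 0.0 1.0 0.0   0.0 0.0 1.0   0.0 1.0   0 0 255 255
meshes: 1
mesh: triangle
material: _default
shader: base
triangles: 1
t: 0 1 2
```

Each `v` line holds position, normal, UV, and RGBA color; each `t` line lists three vertex indices.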
### Python script for blender, to export to custom model format.
### Transforms, complete with position, rotation, scale, velocity, angular velocity, and children.
Entities in the framework have a tag (string) that is unique, with a corresponding id (int) that is equally unique. The reason is that ints are much faster to address by during runtime, while strings are much more readable.<br>
The entity uses glm vec3s to keep track of its position, rotation, scale, velocity and angular velocity. These vectors are in model space: if the entity is root in the world, that vector is in the world's model space, more commonly referred to as world space. However, if it is the child of another entity, the position is relative to its parent's. <br>
Each entity also has a list of entity ids that describe which other entities in the world are children of the entity. When the entity updates, the model matrix is passed on to each of the children.
### Shader programs combined in single files. Parsed during runtime.
Our shader program parser reads entire shader programs written as a single file, with the directives `#shader vertex | geometry | fragment` marking the start of the different shaders.
This allows users to more clearly see the flow of input/output between shaders.
### Additional syntax for shaders
As mentioned above, we've added custom directives for the parser to handle. In addition to the `#shader` directive, we've added the `#prop` directive.
The `#prop` directive is used for setting program specific draw properties.
The draw properties we allow to change through this directive are:
- Culling
`#prop Cull <[on | off] [back | front | both]>`
- Depth Test
`#prop ZTest <on | off>`
- Blend Function
`#prop Blend <[on | off] [A | B]>`
where A and B may be one of the following (see [OpenGL blending](https://www.khronos.org/opengl/wiki/Blending) for further explanation):
If these directives are not used, the program will default to the following:
```
Cullface: Back
BlendFunc: SrcAlpha, OneMinusDstAlpha
DepthTest: On
```
These directives allow shader programs to take control of how they are rendered: the renderer swaps the state of said properties just before anything is drawn (i.e. on binding), and disables them again as soon as the draw call is done.
### Material files coupled with shaders that are attached to models. Transformable entity.
yaml
### Point lights. Transformable entity.
### Directional light. Transformable entity (only rotation and angular velocity).
### Cameras using perspective projection
There can be several cameras in the scene. You can switch between them using TAB. There are also two camera modes, FREELOOK and ORBITAL.
The freelook mode is like the noclip mode first-person shooters often have when spectating a game. You can look around with the mouse, and move in whatever direction you're pointing (using WASD-QE).
Orbital mode is similar to freelook. However, when moving the mouse, the camera will orbit around the origin of the camera's parent. If the camera is root, that is (0,0,0) in world space.
Implementing a third camera mode would be pretty painless, as the structure of the camera code lends itself to extension.
### Loading of scene files that define the layout of cameras, models, directional light, and point lights.
The scene file consists of cameras, entities (models and empty nodes), point lights, a directional light, and child-parent relations. <br>
The file starts with a camera count, then the camera entities.
Each camera entity has position, rotation, velocity and angular velocity. Followed by the camera mode...
### Child-parent relationship between entities where children inherit their parent's transformation.
# Future work / Discussion
## Reflection
## 1. Split up \<model>.yml files into \<mesh>.yml and \<prefab>.yml files
### Things we would have done differently
#### Decouple vertex-mesh from material and shader
### How do we do it now?
Currently in our model-files we are specifying which shader and material are to be used in the rendering of our models.
This has created a strong coupling of a mesh to the shader and material. Stronger than we would like.
...
...
I will mention two big problems with this:
1. It is not convenient to go inside the model file, in between all the vertices and triangles, to edit the materials and shaders attached to that model.
2. For every new model with the same vertices but different materials, we have to duplicate all the vertices and indices. This does not make sense for a big model; loading times will get out of hand quickly.
## 2. File watching
Currently the systems in our 3D engine reload ALL files every time the user asks for an update. If a shader is edited, the user clicks `KEY_1` and all shaders are reloaded and compiled.
After using our system, we see that it would be better to just update the single file a user saves changes to. This should also happen automatically: a file watcher would signal the engine about which file it should reload/recompile.
An attempt was made to actually implement this in our project, but we did not succeed in finding a good solution within a fair amount of time.
A few solutions were explored:
### std::filesystem (C++17 library)
This is a really good candidate for solving this issue. The problem is that very few compilers support this library as of April 2018. It would be awesome to just include a standard library.
### boost::filesystem
boost::filesystem is the inspiration for std::filesystem, and is currently in a much more stable state. The BIG drawback is that it requires installation of the Boost library, which is not a trivial thing to have as a dependency. The library in itself is huge, and it is not easy to pick specific modules. The size of the library is around 2.3 GB on Windows. I did not succeed in installing the library on Windows.
### Unix stat
Unix stat is in theory supported on all 3 major platforms (Linux, Windows and Mac). I could not get it to read the modified status during run-time. I could read it every time the program booted up, but I then had to restart the program to get updated information.
### QT
As with boost::filesystem, this is probably a good, stable solution. Nonetheless, we chose not to explore it, for the same reasons we did not go any further with boost::filesystem.
We wanted to keep our project as simple as possible to maintain, and keep the project size as small and clean as possible. QT is a big library, and does not fit our ethos.
### Platform-specific APIs
I did not explore this, and I am regretting it. Filesystems differ widely from platform to platform. This might be the reason why it is so hard to find a standardized way of reading file status during run-time in C++.
If I had just allowed myself to be platform-specific, I might have made everything a lot easier for myself when solving this specific issue.
*SO Different platform-specific API's* - https://stackoverflow.com/a/931165/9636402 - 30.04.2018
### Remote procedure call to Python through a pipe
This is what we actually ended up pursuing. We got file discovery working, but we did not get file watching running smoothly. If we had had more time, there was a clear path to getting it working this way.
Python's libraries are by nature cross-platform, as long as you can run the Python run-time and install libraries with `pip`.
Here is a github repo which was used to explore this direction - https://github.com/Arxcis/filewatcher/blob/master/watcher.py.
The Python code was trivial. The most difficult part was the communication across the pipe. Some sort of real-time communication protocol has to be implemented across the pipe.
There are also problems with blocking system calls when reading from and writing to the pipe. This could have been solved with multi-threading, or with timeouts and heartbeat signals, both of which are non-trivial to implement.
In the end we just settled with file-discovery on boot up for now.
## 3. Loading performance
Loading models was the obvious performance limitation of our system. Loading a Bunny with 70k vertices and 144k indices took 4-5 seconds.
This is documented in this issue on the project repo - http://prod3.imt.hig.no/overkill-studios/imt2531-assignment2/issues/41 - 30.04.2018
It was actually a lot worse in the beginning, but replacing `std::stringstream` with C-style `scanf` improved performance a lot.
We could have gone a lot further though. After running performance analysis in Visual Studio we discovered that most of the time was not spent in the actual file loading, but parsing. Of the time in the parser a lot was wasted away in allocating temporary strings.
std::string_view does not hold any string data, just pointers, which are allocated on the stack. std::string, on the other hand, allocates on the heap by default. I would expect a 2x-3x performance improvement of the loading, just by replacing all strings with string_views in the parsing.
Allocating excessive amounts of temporary strings is a known issue in the C++ community. Here is a Hacker News discussion of how Google Chrome spends a lot of time allocating strings: https://news.ycombinator.com/item?id=8704318 - 30.04.2018
## 4. Static Vertex VS Dynamic Vertex format
Our vertex format was very static. We only had a single way of representing a vertex:
```
x y z nx ny nz u v r g b a
```
We did this to make the task less complicated. After working with this, it is not entirely clear how we would make this more dynamic in an elegant way.
Limitations:
* Only normal, lacking binormal and tangent.
* We support only 1 set of UV coordinates. Unity supports many sets of UV coordinates.
* We only support drawing triangles. Blender has triangles, quads and more.
More research has to be done on this for us to get a good grasp of how to handle the "vertex-format situation" better. For the time being, we have something that works well enough for many 3D modelling tasks.
The strictness of our vertex format made it challenging to write the Blender export script though. Since Blender has many more possible vertex representations, we have to convert between two data structures which may not be entirely compatible.
We experienced many ugly artifacts when rendering models exported from Blender: triangles were missing, and the stitching between two textures was ugly.
## 5. Packing vertex data
We wanted to pack down the vertex data to a minimum practical size before sending it to the GPU.
We managed to pack the normals from 3 floats (12 bytes) down to 1 int (4 bytes). The packing happens when parsing the contents of the file. Since it is all bitwise operations, it happens really fast and does not affect load times considerably.
With UV coordinates we are compressing each float down to an unsigned short. To fit a float inside a short, we just multiply by 65535.
```cpp
float u, v;
vert.u = 65535U * u;
vert.v = 65535U * v;
```
We are also reading colors into 4 bytes, 1 for each color channel + alpha. This is done directly by `scanf`, using `%c`.
The reason why our struct is 24 bytes is not by chance. For the VertexArrayObject to operate correctly, it needs data which is aligned to the nearest 4 bytes; that is at least what we experienced. When we tried to pack the UV coordinates from 4 down to 2 bytes, we had to add extra padding to make sure our struct was still 24 bytes.
```cpp
struct Vertex
{
    GLfloat  x, y, z;    // 12 bytes
    GLint    n;          //  4 bytes
    GLushort uv;         //  2 bytes <-- packing uvs, yay
    GLubyte  r, g, b, a; //  4 bytes
    GLushort padding;    //  2 bytes <-- wasting space, aww
    // total is still 24 bytes
};
```
Because of this, it did not make sense to pack the UV coordinates.