How to move octree data correctly from parent to child?

This is my first question, so apologies if it is a repeat, I have had a look at the other questions and I still can’t seem to find an answer to this.

So I am making a recursive octree in OpenGL. So far I have everything that I think should make up the octree. If I draw all of the partitions (nodes) up front (that is, from the octree's constructor), the octree displays fully and correctly (with a depth level of 3).

However, I want to add partitions only when I need them. I have cubes moving around inside the big root node; they bounce around inside and never leave. The problem I am having is making the partitions appear when the maximum threshold of a node's items is exceeded. Currently, it only works for the bottom-left-front node.

Here is where I think the problem is occurring, and also where I am trying to dynamically add in partitions:

// Inserts data from the main function into here for checks.
void Node::Insert(Data* all_data)
{
    // If there are children.
    if (!nodes_.empty())
    {
        // Check to see if the data collides with a child node.
        Node* child_node = GetChild(all_data);

        // If the data does collide, recursively call Insert on the child node.
        if (child_node != NULL)
        {
            child_node->Insert(all_data);
            return;
        }
    }

    // If the amount of data exceeds the limit of the node.
    if (data_.size() >= MAX_ITEMS_)
    {
        // If we have not gone too deep into the octree already.
        if (depth_level_ < MAX_DEPTH_LEVEL)
        {
            // Split the current parent octant up into smaller child octants
            // (assuming a Split()-style helper creates the eight children here).
            Split();

            // THE PROBLEM IS HERE.
            // Passing in the data from the parent node.
            // Iterate through all of the data in the parent node.
            for (auto node_data = data_.begin(); node_data != data_.end(); node_data++)
            {
                // See if any of the data collides with the NEW child node.
                Node* child_node = GetChild(*node_data);

                // If the data does collide with the NEW child node.
                if (child_node != NULL)
                {
                    // ACTUAL PROBLEM HERE.
                    // Push the data from the parent node into the child node.
                    child_node->Insert(*node_data);
                }
            }

            // Empty the data vector for the parent/this.
            data_.clear();
        }
    }

    // If there are no child nodes, push the data into the parent/this data vector.
    if (nodes_.empty())
        data_.push_back(all_data);
}

This is all inside my “Insert(Data* data)” function, which is called every frame on all of the cubes (data) bouncing around inside. Can anyone see what I am doing wrong? Or point me in the direction of some good material on insertion, please? From testing, all I can tell is that the loop shown above seemingly does nothing to the application.


EDIT: I am using Visual Studio 2013 with C++11.
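For reference, here is a minimal, self-contained sketch of the insert/split flow described above (hypothetical standalone types, not the asker's actual classes). The key ordering choice it demonstrates: on a split, the stored items are moved into a local vector first, so pushing data into the new children never mutates the container being iterated.

```cpp
#include <cstddef>
#include <memory>
#include <vector>

// Minimal point-octree sketch (standalone types, not the classes above).
struct Vec3f { float x, y, z; };

struct OctreeNode {
    static const std::size_t MAX_ITEMS = 4;
    static const int MAX_DEPTH = 3;

    Vec3f center;
    float halfSize;
    int depth;
    std::vector<Vec3f> data;
    std::vector<std::unique_ptr<OctreeNode>> children;

    OctreeNode(Vec3f c, float h, int d) : center(c), halfSize(h), depth(d) {}

    // Octant index 0..7 from the sign of each axis relative to the centre.
    int ChildIndex(const Vec3f& p) const {
        return (p.x > center.x ? 1 : 0)
             | (p.y > center.y ? 2 : 0)
             | (p.z > center.z ? 4 : 0);
    }

    void Split() {
        float q = halfSize / 2.0f;
        for (int i = 0; i < 8; ++i) {
            Vec3f c = { center.x + ((i & 1) ? q : -q),
                        center.y + ((i & 2) ? q : -q),
                        center.z + ((i & 4) ? q : -q) };
            children.emplace_back(new OctreeNode(c, q, depth + 1));
        }
        // Move the parent's data aside BEFORE redistributing it, so the
        // loop below never walks a vector that is being modified.
        std::vector<Vec3f> moved;
        moved.swap(data);
        for (const Vec3f& p : moved)
            children[ChildIndex(p)]->Insert(p);
    }

    void Insert(const Vec3f& p) {
        if (!children.empty()) {          // route to the matching child
            children[ChildIndex(p)]->Insert(p);
            return;
        }
        data.push_back(p);                // leaf: store here
        if (data.size() > MAX_ITEMS && depth < MAX_DEPTH)
            Split();                      // too full: subdivide and re-home
    }

    std::size_t Count() const {           // total items in this subtree
        std::size_t n = data.size();
        for (const auto& c : children) n += c->Count();
        return n;
    }
};
```

Note that new data is stored before the overflow check, so an item inserted on the frame a node splits still ends up in exactly one leaf.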

Editing the pixels of a rendered image

I want to render a simple OpenGL scene as usual, but then I want to superimpose a small image of my own (such as from a bitmap file) on top of the render, such that this image always shows. For example, this could be thought of as showing a logo in the corner of the screen for a 3D game, where the logo is always displayed on top of the rendered scene.

Please could somebody start me off in the right direction? What should I be looking into? I am rather a novice at OpenGL…

Let us suppose that I have the following code:

#include <GL/glut.h>

void renderScene(void)
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    glBegin(GL_TRIANGLES);
        glVertex3f(-0.5f, -0.5f, 0.0f);
        glVertex3f( 0.5f, -0.5f, 0.0f);
        glVertex3f( 0.0f,  0.5f, 0.0f);
    glEnd();

    glutSwapBuffers();
}

int main(int argc, char **argv)
{
    // init GLUT and create Window
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DEPTH | GLUT_DOUBLE | GLUT_RGBA);
    glutCreateWindow("GLUT Triangles");

    // register callbacks
    glutDisplayFunc(renderScene);

    // enter GLUT event processing cycle
    glutMainLoop();

    return 1;
}
This renders a triangle on the screen. How, for example, would I now render a 10-by-10 bitmap from file, at location (100, 100) on the screen? If the viewpoint was static, I could just calculate its 3D location and render it. However, I want the bitmap image to always be displayed in this location, even when the viewpoint changes.
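A common approach is to draw the 3D scene first, then switch to an orthographic projection measured in window pixels and draw a textured quad with depth testing disabled, so the image always appears on top regardless of the viewpoint. A sketch using the legacy fixed-function pipeline that matches the GLUT code above (`logoTexture` is a placeholder for a texture you would have loaded from your bitmap beforehand, e.g. with `glTexImage2D`):

```cpp
#include <GL/glut.h>

// Draw a small textured quad at a fixed window position, on top of
// whatever the 3D scene rendered. Call this at the end of renderScene,
// just before glutSwapBuffers.
void drawOverlay(int windowWidth, int windowHeight, GLuint logoTexture)
{
    // Switch to a 2D projection where coordinates are window pixels.
    glMatrixMode(GL_PROJECTION);
    glPushMatrix();
    glLoadIdentity();
    glOrtho(0, windowWidth, 0, windowHeight, -1, 1);

    glMatrixMode(GL_MODELVIEW);
    glPushMatrix();
    glLoadIdentity();

    glDisable(GL_DEPTH_TEST);          // ignore the scene's depth buffer
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, logoTexture);

    // A 10x10 quad with its lower-left corner at (100, 100).
    glBegin(GL_QUADS);
        glTexCoord2f(0, 0); glVertex2i(100, 100);
        glTexCoord2f(1, 0); glVertex2i(110, 100);
        glTexCoord2f(1, 1); glVertex2i(110, 110);
        glTexCoord2f(0, 1); glVertex2i(100, 110);
    glEnd();

    glDisable(GL_TEXTURE_2D);
    glEnable(GL_DEPTH_TEST);

    // Restore the 3D matrices for the next frame.
    glPopMatrix();
    glMatrixMode(GL_PROJECTION);
    glPopMatrix();
    glMatrixMode(GL_MODELVIEW);
}
```

Note that with this projection (0, 0) is the bottom-left of the window; swap the bottom/top arguments of `glOrtho` if you prefer a top-left origin.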

Thanks :)

Need help solving an acute triangle to find the distance to the outside of my object

I recently asked a question here: How would I check the range against the entirety of an enemy object, and not just it's transform.position?

That was accurately answered, and works perfectly fine. However, now I need to find the distance from the center of a non-rectangular object to its edge, as illustrated here: [diagram: a triangle with sides A, B and C]

I have the length of Side A, the angle between Side A and Side C, and the angle between Side A and Side B. I am missing a couple of things:

  • The length of Side B
  • The length of Side C

My end goal is to get the length of side C. It is important to note that my math skills are barely comparable to first year college algebra. I heavily rely on the methods provided within Unity to calculate dot products, angles, and distances without much understanding of their inner workings.

How would I go about accomplishing this?
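This is the classic law-of-sines setup: both known angles touch Side A, so the angle facing Side A is 180° minus the other two, and Side C then follows from C / sin(angle facing C) = A / sin(angle facing A). A sketch of the arithmetic in plain C++ rather than Unity, since it is just math (the parameter names are mine):

```cpp
#include <cmath>

// sideA:    known length of Side A
// angleAC:  angle between Side A and Side C, in degrees (it faces Side B)
// angleAB:  angle between Side A and Side B, in degrees (it faces Side C)
double LengthOfSideC(double sideA, double angleAC, double angleAB)
{
    const double DEG = std::acos(-1.0) / 180.0;   // degrees -> radians

    // The three interior angles sum to 180 degrees, so the angle
    // facing Side A is whatever is left over.
    double angleFacingA = 180.0 - angleAC - angleAB;

    // Law of sines: C / sin(angle facing C) == A / sin(angle facing A).
    return sideA * std::sin(angleAB * DEG) / std::sin(angleFacingA * DEG);
}
```

For example, an equilateral triangle (both known angles 60°) gives C = A; Side B is computed the same way with sin(angleAC) in the numerator instead.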

Coroutine to move to position passing the movement speed?

I’m using this coroutine to move my player to a new position. The problem is that it takes the duration as an argument instead of the object’s speed, which makes it hard to ensure the object always moves at the same speed.

How can I create a coroutine that takes the source position, target position and movement speed to move my character?

This is my current code:

IEnumerator MoveObject(Vector3 source, Vector3 target, float duration)
{
    float startTime = Time.time;
    while (Time.time < startTime + duration)
    {
        player.transform.position = Vector3.Lerp(source, target, (Time.time - startTime) / duration);
        yield return null;
    }
    player.transform.position = target;
}
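A common fix is to keep the time-based loop and derive the duration from the distance and the desired speed. The arithmetic, sketched engine-agnostically in C++ (in Unity the equivalent would be computing `Vector3.Distance(source, target) / speed` once before the loop starts):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Straight-line distance between two points.
float Distance(const Vec3& a, const Vec3& b)
{
    float dx = b.x - a.x, dy = b.y - a.y, dz = b.z - a.z;
    return std::sqrt(dx * dx + dy * dy + dz * dz);
}

// Convert a movement speed (units per second) into the duration a
// time-based interpolation should run for; the existing Lerp loop
// can then stay exactly as it is.
float DurationForSpeed(const Vec3& source, const Vec3& target, float speed)
{
    return Distance(source, target) / speed;
}
```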

How to draw a square composed of four triangles? (libGDX)

I have a case in which I need to draw a square composed of four triangles, like in the photo below:

[image: a square divided into four triangles]

The triangle parameters are stored in the database (accessed via JDBC). I know how to draw shapes in libGDX, but this kind of shape seems a bit tricky to me. Any help would be greatly appreciated.

LibGdx – How can I correctly handle tile transitions / connected textures?

So I am making a tiled game and I need to connect textures correctly based on what’s next to them.

With tiles whose connections sit really close to the edge of the tile I could just use the 4-bit method, i.e.:

x 1 x
8 x 2
x 4 x

But for tiles with more detail I need to have corners too. I have seen people use the 8-bit method, i.e.:

1   2  4
128 x  8
64  32 16

But really I can only see 48 different tile combos.

Is there a way I can separate the corners from up/down/left/right? Something like this:

x 1 x  8 x 1
8 x 2  x x x
x 4 x  4 x 2

Or is there a better way all together to do this.

This is what I have tried so far, but it only uses 32/48 tiles and doesn’t work for the special corner cases.

public class RockTile extends Tile {

    int y;

    public RockTile(int id) {
        y = 7;
    }

    public TextureRegion getTexture(Level level, int xLoc, int yLoc) {

        int u = (level.getTile(xLoc, yLoc + 1) != this) ? 1 : 0;
        int r = (level.getTile(xLoc + 1, yLoc) != this) ? 2 : 0;
        int d = (level.getTile(xLoc, yLoc - 1) != this) ? 4 : 0;
        int l = (level.getTile(xLoc - 1, yLoc) != this) ? 8 : 0;

        int bit = u + r + d + l;

        int ur = (level.getTile(xLoc + 1, yLoc + 1) != this) ? 1 : 0;
        int dr = (level.getTile(xLoc + 1, yLoc - 1) != this) ? 2 : 0;
        int dl = (level.getTile(xLoc - 1, yLoc - 1) != this) ? 4 : 0;
        int ul = (level.getTile(xLoc - 1, yLoc + 1) != this) ? 8 : 0;

        int corner = ul + ur + dl + dr;

        if (bit == 0) {
            if (corner != 0)
                return Game.splitTiles[y + 1][corner];
            else
                return Game.splitTiles[y][bit];
        } else {
            if (corner != 0) {
                if (dl > 0 && l == 0 && d == 0)
                    return Game.splitTiles[y + 2][bit];
                else
                    return Game.splitTiles[y][bit];
            }
            return Game.splitTiles[y + 2][bit];
        }
    }
}

If you need anything else let me know.

EDIT: A side question, since I seem to have stumped you guys: if I decided to use the 8-bit method, what would be the best and fastest way to build a lookup table for it?
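One way to build that lookup table (a sketch in plain C++ rather than Java, with an assumed bit layout): enumerate all 256 neighbour masks once and reduce each to a canonical form, using the fact that a corner neighbour only affects the texture when both of its adjacent edge neighbours are the same tile type. With that rule the 256 raw masks collapse to 47 canonical ones, which is essentially the 48-ish count observed above.

```cpp
#include <array>
#include <cstdint>
#include <map>

// Assumed bit layout (bit set = neighbour is the SAME tile type):
//   edges:   N = 1, E = 2, S = 4, W = 8
//   corners: NE = 16, SE = 32, SW = 64, NW = 128
// A corner only matters when both of its adjacent edges are also set,
// so clearing the irrelevant corner bits gives each mask a canonical form.
uint8_t Canonical(uint8_t mask)
{
    const uint8_t N = 1, E = 2, S = 4, W = 8;
    const uint8_t NE = 16, SE = 32, SW = 64, NW = 128;

    if ((mask & (N | E)) != (N | E)) mask &= (uint8_t)~NE;
    if ((mask & (S | E)) != (S | E)) mask &= (uint8_t)~SE;
    if ((mask & (S | W)) != (S | W)) mask &= (uint8_t)~SW;
    if ((mask & (N | W)) != (N | W)) mask &= (uint8_t)~NW;
    return mask;
}

// 256-entry lookup table: raw neighbour mask -> dense tile index,
// numbering canonical masks in the order they first appear.
std::array<int, 256> BuildLookup()
{
    std::array<int, 256> table{};
    std::map<uint8_t, int> indexOf;
    for (int m = 0; m < 256; ++m) {
        uint8_t c = Canonical((uint8_t)m);
        if (indexOf.find(c) == indexOf.end())
            indexOf[c] = (int)indexOf.size();
        table[(std::size_t)m] = indexOf[c];
    }
    return table;
}
```

At render time the texture choice is then a single `tiles[table[mask]]` lookup per tile, with no branching.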

Sphere to plane collision never reaching resting contact

I have been trying to get a sphere-to-plane collision to eventually settle into resting contact, but my sphere always ends up bouncing forever. It bounces correctly the first few times, but eventually it just stays in a constant state of a very small bounce. I have tried to exhaust every article I can find, but I don’t see where I am going wrong. Does anyone see the issue? I have tried to put as much information as possible into this post, so I apologize if it is a bit long.

I first check whether there is resting contact, a collision, no collision, or whether the sphere and plane will collide:

Vector3 forces = (sphereTotalForces + sphereVelocity);
float denom = Vec3Dot(&plane->GetNormal(), &forces);

if (sphereVelocity.GetMagnitude() > 0 && dist <= sphereRadius + .0013f) {
    crCollisionResult.enCollisionState = RESTING_CONTACT;
    crCollisionResult.vCollisionNormal = plane->GetNormal();
    return crCollisionResult;
} else if (fabs(dist) + 0.001f <= sphere->GetRadius()) {
    crCollisionResult.enCollisionState = COLLISION_HAS;
    return crCollisionResult;
} else if (denom * dist > 0.0f) {
    crCollisionResult.enCollisionState = COLLISION_NO;
    return crCollisionResult;
}


sphereRadius  = 1
denom = -9.98206139
dist = 2.34482479

Since none of those are true, I then calculate when the collision will happen:

    F32 fIntersectionTime = (sphere->GetRadius() - dist) / denom;
    F32 r;
    if (dist > 0.0f)
        r = sphere->GetRadius();
    else
        r = -sphere->GetRadius();

    Vector3 collision = spherePosition + fIntersectionTime * rigidBodySphere->GetVelocity() - r * plane->GetNormal();
    crCollisionResult.fCollisionTime = fIntersectionTime;
    crCollisionResult.vCollisionNormal = plane->GetNormal();
    crCollisionResult.vAdjustedPosition = collision;

The intersection time is calculated as 0.00334050134, which is less than the fElapsedTime of 0.00703109941. As a result, I update my sphere using the intersection time as the delta (so fElapsedTime here is 0.00334050134):

    mAcceleration = mTotalForces / mMass;
    mVelocity += mAcceleration * fElapsedTime;
    Vector3 translation = (mVelocity * fElapsedTime);

    mTransform->Translate(translation.x, translation.y, translation.z);

    mTotalForces = Vector3(0.0f, 0.0f, 0.0f);


mTotalForces = (0, -10, 0)
mMass = 22

After updating the sphere with its new position, I then calculate its new, reflected velocity:

        Vector3 vSurfaceNormalized;
        Vec3Normalize(&vSurfaceNormalized, &crCollisionResult.vCollisionNormal);

        Vector3 vIncomingVelocity = pSphere->GetVelocity();
        F32 fIDotN = Vec3Dot(&vIncomingVelocity, &vSurfaceNormalized);

        Vector3 vReflectedVelocity = vIncomingVelocity - ((1.0f+pSphere->GetCoefficientOfRestitution()) * vSurfaceNormalized * fIDotN);


pSphere->mCoefficientOfRestitution = .85
vIncomingVelocity = (0, -5.30, 0)
mNormal = (0, 1, 0)
fIDotN = -5.30350208
vReflectedVelocity = (0, 4.50797653, 0)

Why are 16×16 pixel tiles so common?

Is there any good reason for tiles (e.g. Minecraft‘s) to be 16×16?

I have a feeling it has something to do with binary because 16 is 10000 in binary, but that might be a coincidence.

The reason I want to know is that I want to make a game with terrain generation, and I’d like to know how big my tiles should be, or whether it doesn’t really matter.
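One practical reason powers of two are popular: converting between pixel coordinates and tile coordinates becomes a bit shift and a mask instead of a division and a modulo, and power-of-two tiles pack evenly into power-of-two texture atlases. A small sketch:

```cpp
// With 16-pixel tiles (16 = 2^4), pixel -> tile index is a right
// shift, and the offset inside the tile is a bitwise AND.
const int TILE_SHIFT = 4;                       // log2(16)
const int TILE_MASK  = (1 << TILE_SHIFT) - 1;   // 15

int TileIndex(int pixel)  { return pixel >> TILE_SHIFT; }
int TileOffset(int pixel) { return pixel & TILE_MASK; }
```

That said, non-power-of-two sizes work fine on modern hardware, so for terrain generation it is largely a matter of art style and convention.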

Unity changing texture of RawImage from code

I have a UI element, RawImage, which I would like to change the texture of based on which random number between 1 and 6 is generated.

I want to change the picture of the dice, but I don’t know how to access the RawImage.

This is what I’ve tried. I know it’s terribly wrong, but I have no clue how to handle this.

using UnityEngine;
using UnityEngine.UI;   // RawImage lives in this namespace
using System.Collections;

public class RollTheDice : MonoBehaviour {

    // One texture per die face, assigned in the Inspector
    // (was: public GameObject[] dices;).
    public Texture[] diceTextures;

    public void OnClick() {
        System.Random dice = new System.Random();
        int dicePoints = dice.Next(1, 7);

        Debug.Log(dicePoints);

        // Arrays are zero-based, so face N is at index N - 1.
        GetComponent<RawImage>().texture = diceTextures[dicePoints - 1];
    }

    // Use this for initialization
    void Start() {
    }

    // Update is called once per frame
    void Update() {
    }
}
If I'm editing in Photoshop, is there any advantage to using Camera Raw directly rather than Lightroom first?

If an image is to be edited in Photoshop eventually, is it better to do the initial raw file processing in Lightroom, or should I use Camera Raw inside Photoshop to apply the same settings I would otherwise do in Lightroom? Does this make a difference in the outcome of the final image? Is one workflow recommended under certain circumstances?
