First, I had to expand my gesture data array to accommodate data points from up to two hands and make sure the data was stored properly. Next, I had to create another gesture data structure to store movement data, and fill this new array with the correct data. To calculate the change in position efficiently, I stored the previous data frame captured from the Leap Motion and passed it into the next call that captures the frame update. From there it was just simple subtraction and placement in the movement data array. At this step I also draw lines in the pygame window to help visualize how each data point is moving.

From here the movement data needed to be reshaped and resized for efficiency. I kept only the data about the fingertips and the tips of the pinky and index metacarpals. Most moving gestures in this set require only information about the fingertips and the rotation of the hand, so all other data points are largely redundant.

From there it was off to the classifier. Movement data was broken up into blocks of 30 frames; for gestures involving movement, I sent 30 frames of data to the KNN predictor. If the normal positional KNN predictor could not establish a correct prediction within 30 frames, the movement predictor would be used. After every call to the KNN predictor, I have the game loop sleep for a few milliseconds to allow the predictor to catch up, along with
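The delta step described above can be sketched roughly as follows. This is a minimal illustration, not the project's actual code: the function name `movement_deltas`, the use of NumPy arrays, and the choice of 12 tracked points per hand frame are all my assumptions.

```python
import numpy as np

def movement_deltas(prev_frame, curr_frame):
    """Per-point displacement between two captured hand frames.

    Each frame is assumed to be an (N, 3) array of (x, y, z)
    positions for the tracked points (fingertips plus the index
    and pinky metacarpal tips). As in the text, the movement data
    is just the element-wise difference from the previous frame.
    """
    return curr_frame - prev_frame

# Example with two hypothetical frames of 12 tracked points:
prev = np.zeros((12, 3))
curr = np.ones((12, 3))
deltas = movement_deltas(prev, curr)  # each point moved by (1, 1, 1)
```

In a game loop, the caller would keep `curr` around and pass it in as `prev` on the next frame update, which matches the "store the previous data frame and send it to the next call" approach described above.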
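The 30-frame blocking for the movement predictor might look something like the sketch below. The class name `MovementBuffer`, the flattened feature layout, and the 12-points-per-frame shape are my assumptions; the actual project's data layout may differ.

```python
import numpy as np
from collections import deque

FRAMES_PER_BLOCK = 30  # block size used in the text

class MovementBuffer:
    """Collects per-frame movement vectors into 30-frame blocks
    suitable for feeding a KNN predictor (an illustrative sketch)."""

    def __init__(self):
        # deque drops the oldest frame automatically once full,
        # giving a sliding 30-frame window of movement data.
        self.frames = deque(maxlen=FRAMES_PER_BLOCK)

    def push(self, deltas):
        """Add one frame of per-point movement deltas, flattened."""
        self.frames.append(np.asarray(deltas).ravel())

    def block(self):
        """Return one flat feature vector once 30 frames are buffered,
        else None (fewer frames means no movement prediction yet)."""
        if len(self.frames) < FRAMES_PER_BLOCK:
            return None
        return np.concatenate(self.frames)
```

The returned block (30 frames x 12 points x 3 coordinates, flattened) would then be a single sample for the KNN predictor, e.g. `predictor.predict([buf.block()])` with a fitted scikit-learn `KNeighborsClassifier`, though the text does not say which KNN implementation was used.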