Table of Contents
[BlocksExtension]

touching color () ?

Description

The block checks whether its sprite is touching a specified color. If it is, the block returns “true”; otherwise, it returns “false”.

Example

Learn how to interface a relay with Quarky, the versatile microcontroller, to control high-voltage appliances using electromagnetic induction.
Introduction

A relay is an electromagnetic switch that works on the principle of electromagnetic induction. A relay is used to control high-voltage appliances with a microcontroller. The relay has a primary side, which is connected to the controller, and a secondary side, which is connected to the load, for example a motor, light bulb, or fan. The primary side has 3 pins named VCC, GND, and IN. The secondary side has 3 terminals named Common (COM), Normally Open (NO), and Normally Closed (NC).

5V Single-Channel Relay Module - Pin Diagram, Specifications, Applications, Working

In this example, we will be interfacing a relay with Quarky. We already know how to connect an IR sensor with Quarky, and now we will use the IR sensor to trigger the relay. Let’s begin.

Circuit Diagram:

Code:

  1. Connect the IR sensor and relay to Quarky as per the above circuit diagram.
  2. Open PictoBlox and create a new file.
  3. Go to the Boards menu and select Quarky.
  4. Add an if-then-else block from the Control palette.
  5. From the Operators palette, add the “less than” operator in the conditional space.
  6. Go to the Quarky palette and add the “read analog pin ()” block into the first space of the “less than” operator. Change the value to 500.
  7. Use the “set digital pin () as ()” block from the Quarky palette to trigger the relay connected at D1 if the value is less than 500.
  8. If the value is above the set value (500), the LED must turn OFF.
  9. Place the above set of code inside a “forever” loop.
  10. Now add the “when flag clicked” block at the start of the script (a Python-mode sketch of the same logic follows).
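If you are working in PictoBlox’s Python mode instead, the same logic can be sketched as follows. This is a minimal sketch only: the readanalogpin() and setdigitalpin() method names are assumed to mirror the “read analog pin ()” and “set digital pin () as ()” blocks above, and the A1/D1 pin names follow the circuit description, so verify the exact calls in your PictoBlox version.

# Python-mode sketch of the relay trigger logic described above.
# Assumption: readanalogpin() and setdigitalpin() mirror the blocks used in the steps.
from quarky import *

quarky = Quarky()

while True:
  ir_value = quarky.readanalogpin("A1")   # IR sensor reading (assumed pin A1)
  if ir_value < 500:
    quarky.setdigitalpin("D1", 1)         # value below 500: turn the relay ON
  else:
    quarky.setdigitalpin("D1", 0)         # value above 500: turn the relay OFF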

Script:

Output:

In this comprehensive introduction, you have learned about Relay and its interfacing with Quarky,  the versatile microcontroller, and its potential in robotics and AI projects. Explore its various features, sensors, and plug-and-play functionality.

Read More
This project features a Three IR Line Following Robot with Quarky, using PID control and adaptive feedback for precise and smooth movement.

Steps:

  1. Connect the external IR modules’ digital pins to the Quarky.
  2. Set the IR threshold using the potentiometer on the module for line following and for stopping the robot at the crossing lines.
  3. The example below uses the PID line-following blocks (a rough sketch of the PID loop is given after this list).
  4. When you click “do line following”, the robot starts line following and stops at the checkpoint (when both IRs are on the black line).
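The PID blocks compute a steering correction from the IR error and feed it back to the motors. Here is a rough, self-contained Python sketch of that feedback loop under stated assumptions: read_ir() and set_motor_speeds() are placeholder helpers standing in for the corresponding Quarky blocks, and the gain values are only illustrative.

# Sketch of a PID line-following loop (placeholder helpers, illustrative gains).
import time

def read_ir():
  # Placeholder: return (left, center, right) readings, 1 = black line detected.
  return 0, 1, 0

def set_motor_speeds(left_speed, right_speed):
  # Placeholder: drive the left and right motors at the given speeds.
  pass

Kp, Ki, Kd = 1.2, 0.0, 0.4
integral = 0.0
last_error = 0.0
base_speed = 60

while True:
  left, center, right = read_ir()
  if left and right:                         # both outer IRs on black: checkpoint reached
    set_motor_speeds(0, 0)
    break
  error = right - left                       # steering error: -1, 0 or +1
  integral += error
  derivative = error - last_error
  correction = Kp * error + Ki * integral + Kd * derivative
  set_motor_speeds(base_speed + 40 * correction, base_speed - 40 * correction)
  last_error = error
  time.sleep(0.01)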

Script

Read More
Learn how to connect your Quarky to a WiFi network and troubleshoot any connection issues. Follow the guide to ensure a successful connection.

In this example, we look at how to establish a Wi-Fi connection and check whether Quarky is connected or not. The code will connect the Quarky to a specified Wi-Fi network and show a green status if the connection is successful, or a red status if the connection fails.

Alert: Quarky’s Wi-Fi connection ability is only available when using the Upload Mode in PictoBlox. This mode allows users to write scripts and upload them to the board so they can run while the board is not connected to a computer.

Code

# imported modules
from quarky import *
import iot

# Create a Wi-Fi object
wifi = iot.wifi()

# Change the Wi-Fi Name and Password
wifi.connecttowifi("IoT", "12345678")

# Run the loop to check if the Wi-Fi is connected
while True:
  # Check if the Wi-Fi is connected
  if wifi.iswificonnected():
    # Draw a green pattern on the Quarky LEDs (Wi-Fi connected)
    quarky.drawpattern("ccccccccccccccccccccccccccccccccccc")

  else:
    # Draw a red pattern on the Quarky LEDs (Wi-Fi not connected)
    quarky.drawpattern("bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb")

  1. This code uses the Quarky library to draw a pattern depending on the status of the Wi-Fi connection. If the connection is successful, it draws a green pattern on the Quarky LEDs; if it fails, it draws a red pattern.
  2. It imports the iot module and connects to a Wi-Fi network with the specified name and password.
  3. A while loop then checks whether the Wi-Fi is connected and draws the appropriate pattern.

Output

Alert: If you do not yet have the IoT House assembled, please refer to this document to guide you through the assembly process: https://ai.thestempedia.com/docs/iot-house-quarky-addon-kit-documentation/iot-house-assembly-guide/.

Troubleshooting

  1. If the Green Light is displayed, your Wi-Fi is connected.
  2. If the Red Light is displayed, your Wi-Fi is not connected. Correct the Wi-Fi name and password in the code and try again.
  3. If the Red Cross sign is displayed, a Python error has occurred on the Quarky. Check the serial monitor and try resetting the Quarky.
Read More
The project demonstrates how to interface the gas sensor to the Quarky and get the PPM reading. Later, we will create an air pollution monitoring system on Adafruit IO.

The project demonstrates how to interface the gas sensor to the Quarky and get the PPM reading. Later, we will create an air pollution monitoring system on Adafruit IO.

Adafruit IO Settings

We will be using Adafruit IO for creating a switch on the cloud. Follow the instructions:

  1. Create a new Feed named Gas Sensor.
  2. Create a new Dashboard named Sensor Monitoring.
  3. Edit the Dashboard and add a Gauge Block.
  4. Connect the Gas Sensor feed to the block and click on Next Step.
  5. Edit the Block Setting and click on Create Block.
  6. Block is added.

Circuit of Gas Sensor

The gas sensor module has the following connections:

  1. GND Pin connected to GND of the Quarky Expansion Board.
  2. VCC Pin connected to VCC of the Quarky Expansion Board.
  3. AO (Signal Pin) connected to Analog Pin A1 of the Quarky Expansion Board.

All About MQ2 Gas Sensor

Gas sensors are designed to measure the concentration of gases in the environment. The MQ2 gas sensor is suitable for detecting H2, LPG, CH4, CO, alcohol, smoke, and propane. Due to its high sensitivity and fast response time, measurements can be taken quickly.

Note:  The sensor value only reflects the approximate trend of gas concentration within a permissible error range; it DOES NOT represent the exact gas concentration. Detecting the exact concentration of specific components in the air usually requires a more precise and costly instrument, which cannot be done with a single gas sensor.

MQ-2 Gas Sensor Sensitivity Characteristics:

The graph tells us the concentration of a gas in parts per million (ppm) according to the resistance ratio of the sensor (RS/R0).

  1. RS is the resistance of the sensor that changes depending on the concentration of gas.
  2. R0 is the resistance of the sensor at a known concentration without the presence of other gases, or in the fresh air.

For air, RS/R0 = 9.8 for the MQ2 gas sensor.

Note:  According to the graph, we can see that the minimum concentration we can measure is 100 ppm and the maximum is 10000 ppm; in other words, we can measure a gas concentration between 0.01% and 1%.

Calculation of R0 for the Sensor

RS = [(Vin x RL) / Vout] - RL
  1. Vin is 5V in our case.
  2. RL is 10 kOhm
  3. Vout is the analog voltage reading from the sensor

Since we only ever use the ratio RS/R0, the load resistance RL cancels out, so we can drop it from the formula:

RS = (Vin - Vout) / Vout

From the graph, we can see that the resistance ratio in fresh air is constant:

RS / R0 = 9.8

To calculate R0 we will need to find the value of the RS in the fresh air using the above formula. This will be done by taking the analog average readings from the sensor and converting it to voltage. Then we will use the RS formula to find R0.

R0 = RS / 9.8
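As a quick sketch of this calibration step (assuming the sensor is read as a raw 12-bit ADC value between 0 and 4095, as in the Python script later in this document; read_gas_adc() is a placeholder for your analog-read call):

# Estimating R0 in fresh air from averaged ADC readings.
def read_gas_adc():
  # Placeholder: return the raw analog reading from the sensor pin (A1).
  return 400

total = 0
for _ in range(20):
  total += read_gas_adc()
average = total / 20

RS_air = (4095 - average) / average   # RS = (Vin - Vout) / Vout in ADC counts
R0 = RS_air / 9.8                     # fresh-air ratio RS/R0 = 9.8 for the MQ2
print(R0)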

Calculating PPM for a particular gas

Let’s analyze the graph:

  1. The scale of the graph is log-log. This means that on a linear scale, the behavior of the gas concentration with respect to the resistance ratio is exponential.
  2. The data for gas concentration only ranges from 200 ppm to 10000 ppm.
  3. Even though the relation between resistance ratio and gas concentration may seem linear, in reality, it is not.

First of all, we will treat the lines as if they were linear. This way we can use one formula that linearly relates the ratio and the concentration. By doing so, we can find the concentration of a gas at any ratio value even outside of the graph’s boundaries. The formula we will be using is the equation for a line, but for a log-log scale. The formula for a line is:

  y = mx + b

Where:

y: Y value (the RS/R0 resistance ratio)
x: X value (the gas concentration in ppm)
m: Slope of the line
b: Y intercept

For a log-log scale, the formula looks like this:

  log(y) = m*log(x) + b
Note:  The log is base 10.

Okay, let’s find the slope. To do so, we need to choose 2 points from the graph.

In our case, we chose the points (200,1.6) and (10000,0.27) from the LPG line. The formula to calculate m is the following:

m = [log(y) - log(y0)] / [log(x) - log(x0)]

If we apply the logarithmic quotient rule we get the following:

m = log(y/y0) / log(x/x0)

Now we substitute the values for x, x0, y, and y0:

m = log(0.27/1.6) / log(10000/200)
m = -0.473

Now that we have m, we can calculate the y-intercept. To do so, we need to choose one point from the graph (once again from the LPG line). In our case, we chose (5000,0.46)

log(y) = m*log(x) + b
b = log(y) - m*log(x)
b = log(0.46) - (-0.473)*log(5000)
b = 1.413

Now that we have m and b, we can find the gas concentration for any ratio with the following formula:

log(x) = [log(y) - b] / m

However, in order to get the real value of the gas concentration according to the log-log plot we need to find the inverse log of x:

x = 10 ^ {[log(y) - b] / m}
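For example, plugging in the LPG constants derived above (m = -0.473, b = 1.413), the concentration for a measured ratio can be computed like this; note the base-10 logarithm:

# Worked example: ppm for LPG from a measured RS/R0 ratio.
import math

m = -0.473
b = 1.413
ratio = 0.46                               # example RS/R0 reading

ppm = 10 ** ((math.log10(ratio) - b) / m)
print(round(ppm))                          # about 5000 ppm, matching the point (5000, 0.46)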

In the table given below, you can find the value of m and b for different gases.

Script

There are two steps to calculating PPM for the gas:

  1. First, we will calculate the value of R0. To calculate R0, we need to find the value of RS in fresh air. This is done by averaging analog readings from the sensor and converting them into the corresponding voltage value; we then use the formula above to calculate R0. Wait until you get a stable value of R0. Make this script in PictoBlox to get the value of R0.
  2. After that, we will use the above-calculated value of R0 to find out the concentration of gases in ppm and send it to the cloud.
Note:  If you want to detect other gases, change the value of b and m in the program according to the sensor from the table.

Output

Upload Mode

You can also make the project work independently of PictoBlox using the Upload Mode. For that, switch to Upload Mode and replace the “when green flag clicked” block with the “when Quarky starts up” block.

You can download the code from here: IoT- based Air Pollution Monitoring System – Upload Mode

Read More
Learn how to use the ML Environment to make a model that identifies hand gestures and makes the humanoid move accordingly.

Introduction

This project demonstrates how to use the Machine Learning Environment to make a machine learning model that identifies hand gestures and makes the humanoid move accordingly.

We are going to use the Hand Classifier of the Machine Learning Environment. The model works by analyzing your hand position with the help of 21 data points.

Hand Gesture Classifier Workflow

Follow the steps below:

  1. Open PictoBlox and create a new file.
  2. You can click on “Machine Learning Environment” to open it.
  3. Click on “Create New Project“.
  4. A window will open. Type in a project name of your choice and select the “Hand Gesture Classifier” extension. Click the “Create Project” button to open the Hand Pose Classifier window.
  5. You shall see the Classifier workflow with two classes already made for you. Your environment is all set. Now it’s time to upload the data.

Class in Hand Gesture Classifier

There are 2 things that you have to provide in a class:

  1. Class Name: It’s the name by which the class will be referred.
  2. Hand Pose Data: This data can either be taken from the webcam or by uploading from local storage.

Note: You can add more classes to the projects using the Add Class button.
Adding Data to Class

You can perform the following operations to manipulate the data into a class.

  1. Naming the Class: You can rename the class by clicking on the edit button.
  2. Adding Data to the Class: You can add the data using the Webcam or by Uploading the files from the local folder.
    1. Webcam:
Note: You must add at least 20 samples to each of your classes for your model to train. More samples will lead to better results.
Training the Model

After data is added, it’s fit to be used in model training. In order to do this, we have to train the model. By training the model, we extract meaningful information from the hand pose, and that in turn updates the weights. Once these weights are saved, we can use our model to make predictions on data previously unseen.

The accuracy of the model should increase over time. The x-axis of the graph shows the epochs, and the y-axis represents the accuracy at the corresponding epoch. Remember, the higher the reading in the accuracy graph, the better the model. The range of the accuracy is 0 to 1.

Testing the Model

To test the model, simply enter the input values in the “Testing” panel and click on the “Predict” button.

The model will return the probability of the input belonging to the classes.

Export in Python Coding

Click on the “Export Model” button on the top right of the Testing box, and PictoBlox will load your model into the Python Coding Environment if you have opened the ML Environment in Python Coding.

Code

The following code appears in the Python Editor of the selected sprite.

####################imports####################
# Do not change

import numpy as np
import tensorflow as tf

sprite = Sprite('Tobi')
quarky = Quarky()
import time

humanoid = Humanoid(7, 2, 6, 3, 8, 1)

# Do not change
####################imports####################

#Following are the model and video capture configurations
# Do not change

model=tf.keras.models.load_model(
    "num_model.h5",
    custom_objects=None,
    compile=True,
    options=None)
pose = Posenet()                                                    # Initializing Posenet
pose.enablebox()                                                    # Enabling video capture box
pose.video("on",0)                                                  # Taking video input
class_list=['forward','backward','left','right','stop']                  # List of all the classes
def runQuarky(predicted_class):
    if pose.ishanddetected():
      if predicted_class == "forward":
        humanoid.move("forward", 1000, 1)
        
      if predicted_class == "backward":
        humanoid.move("backward", 1000, 1)
        
      if predicted_class == "left":
        humanoid.move("left", 1000, 1)
        
      if predicted_class == "right":
        humanoid.move("right", 1000, 1)
        
      if predicted_class == "stop":
        humanoid.home()
    else:
      quarky.stoprobot()

# Do not change
###############################################

#This is the while loop block, computations happen here
# Do not change

while True:
  pose.analysehand()                                             # Using Posenet to analyse hand pose
  coordinate_xy = []

  # for loop to iterate through 21 points of recognition
  for i in range(21):
    if(pose.gethandposition(1,i,0)!="NULL"  or pose.gethandposition(2,i,0)!="NULL"):
      coordinate_xy.append(int(240+float(pose.gethandposition(1,i,0))))
      coordinate_xy.append(int(180-float(pose.gethandposition(2,i,0))))
    else:
      coordinate_xy.append(0)
      coordinate_xy.append(0)
            
  coordinate_xy_tensor = tf.expand_dims(coordinate_xy, 0)        # Expanding the dimension of the coordinate list
  predict=model.predict(coordinate_xy_tensor)                    # Making an initial prediction using the model
  predict_index=np.argmax(predict[0], axis=0)                    # Generating index out of the prediction
  predicted_class=class_list[predict_index]                      # Tallying the index with class list
  print(predicted_class)
  runQuarky(predicted_class)

 

Note: You can edit the code to add custom code according to your requirement.

Logic

  1. If the identified class from the analyzed image is “forward,” the humanoid will move forward at a specific speed.
  2. If the identified class is “backward,” the humanoid will move backward.
  3. If the identified class is “left,” the humanoid will move left.
  4. If the identified class is “right,” the humanoid will move right.
  5. Otherwise, the humanoid will be in the home position.
if pose.ishanddetected():
  if predicted_class == "forward":
    humanoid.move("forward", 1000, 1)

  if predicted_class == "backward":
    humanoid.move("backward", 1000, 1)

  if predicted_class == "left":
    humanoid.move("left", 1000, 1)

  if predicted_class == "right":
    humanoid.move("right", 1000, 1)

  if predicted_class == "stop":
    humanoid.home()
else:
  quarky.stoprobot()

Output

Read More
Learn how to code logic for video input detection, set the bounding box threshold, and detect signals to control Humanoid movements.

Introduction

A sign detector Humanoid robot is a robot that can recognize and interpret certain signs or signals, such as hand gestures or verbal commands, given by a human. The robot uses sensors, cameras, and machine learning algorithms to detect and understand the sign, and then performs a corresponding action based on the signal detected.

These robots are often used in manufacturing, healthcare, and customer service industries to assist with tasks that require human-like interaction and decision-making.

Code

Logic

  1. Initialize the video on the stage and set the transparency to 0%.
  2. Show the bounding box and set its threshold to 0.8.
  3. Get the input from the camera forever.
  4. If the signal is detected as ‘Go’, display an ‘up arrow’, play the “go straight” sound, and make the Humanoid move 2 steps forward at high speed using the “do () motion () times at () speed” block.
  5. If the signal is detected as ‘TurnRight’, display a ‘right arrow’, play the “turn right” sound, and make the Humanoid take a right turn at high speed using the “do () motion () times at () speed” block.
  6. If the signal is detected as ‘TurnLeft’, display a ‘left arrow’, play the “turn left” sound, and make the Humanoid take a left turn at high speed using the “do () motion () times at () speed” block.
  7. If the signal is detected as ‘Stop’, clear the Quarky display and return the Humanoid to its home() posture (a rough Python sketch of this logic is given after this list).
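If you prefer to read this logic as Python, here is a minimal sketch modeled on the quadruped sign-detection script later in this document. The Humanoid pin numbers are copied from the earlier hand-gesture example and the 0.8 threshold from the list above; the arrow display and sound steps are omitted, and the class names are assumed to match your model.

# Rough Python sketch of the sign-controlled Humanoid logic (display/sound omitted).
humanoid = Humanoid(7, 2, 6, 3, 8, 1)

recocards = RecognitionCards()
recocards.video("on flipped")
recocards.enablebox()
recocards.setthreshold(0.8)

while True:
  recocards.analysecamera()
  sign = recocards.classname()
  if recocards.count() > 0:
    if 'Go' in sign:
      humanoid.move("forward", 1000, 2)    # two steps forward
    elif 'TurnRight' in sign:
      humanoid.move("right", 1000, 1)
    elif 'TurnLeft' in sign:
      humanoid.move("left", 1000, 1)
    else:                                  # 'Stop' or anything else
      humanoid.home()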

Output

Read More
Learn how to code logic for speech recognized control of Mars Rover with this example block code. You will be able to direct your own Mars Rover easily by just speaking commands.

Learn how to code logic for speech recognized control of Mars Rover with this example block code. You will be able to direct your own Mars Rover easily by just speaking commands.

Introduction

A speech-recognition controlled Mars Rover robot is a robot that can recognize and interpret verbal commands given by a human. The code uses a speech recognition model that records and analyzes your speech and makes the Mars Rover react accordingly.

Speech recognition robots can be used in manufacturing and other industrial settings to control machinery, perform quality control checks, and monitor equipment.

They are also used to help patients with disabilities to communicate with their caregivers, or to provide medication reminders and other health-related information.

Main Code:

Logic

  1. Firstly, the code initializes the Mars Rover pins and starts recording from the device’s microphone to capture the user’s audio command.
  2. The code then checks whether the command includes the word “Go” or not. You can use customized commands and test for different conditions on your own.
  3. If the first condition is false, the code checks the command for the other keywords.
  4. When any condition is true, the robot aligns itself accordingly and moves in the direction of the respective command (a rough sketch of this dispatch is given after this list).
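The keyword checks described above can be sketched in Python as a simple dispatch. Everything here is a placeholder: get_voice_command() stands in for the speech-recognition recording and analysis step, and drive_rover() for the Mars Rover motion calls, since neither API is shown in this article.

# Sketch of the speech-command dispatch (placeholder helpers only).
def get_voice_command():
  # Placeholder: record audio from the microphone and return the recognized text.
  return "go forward"

def drive_rover(direction):
  # Placeholder: align the Mars Rover and move in the given direction.
  print("Rover:", direction)

command = get_voice_command().lower()
if "go" in command:
  drive_rover("forward")
elif "back" in command:
  drive_rover("backward")
elif "left" in command:
  drive_rover("left")
elif "right" in command:
  drive_rover("right")
else:
  drive_rover("stop")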

Output

Forward-Backward Motions:

Right-Left Motions:

Read More
Learn to control Mecanum Gripper Robot using Dabble App on your device with customized functions for different motions and activities.

Introduction

In this activity, we will control the Mecanum Gripper according to our needs using the Dabble application on our own Devices.

We will first understand how to operate Dabble and how to modify our code according to the requirements. The following image is the front page of the Dabble Application.

Select the Gamepad option from the Home Screen and we will then use the same gamepad to control our Mecanum Gripper.

Code

The following blocks represent the different functions created to control the Mecanum Gripper for different types of motion. We will use the arrow buttons to control the basic movements (forward, backward, lateral left, lateral right) and custom functions to control the gripper actions: the Triangle button closes the gripper arms, the Circle button opens them, the Cross button rotates the robot to the right, and the Square button rotates it to the left. The Select button stops the Mecanum whenever needed.

Note: You can always customize each and every function and button to make your own activities easily. You will have to add the Mecanum and Dabble extensions to access the blocks. To access the basic extensions required, make sure to select the Board as Quarky first.

Initialization

Main Code

You will have to connect the Quarky with the Dabble Application on your device. Make sure Bluetooth is enabled on the device before connecting. Connect the Mecanum to the Dabble application after uploading the code. You will be able to connect by clicking on the plug option in the Dabble Application as seen below. Select that plug option and you will find your Quarky device. Connect by clicking on the respective Quarky.

Important Notes

  1. The code will only run after it is uploaded, with the Mecanum connected to the laptop using a C-type cable.
  2. You will be able to upload the code by selecting the Upload option beside the Stage option.
  3. In some cases you may have to upload the firmware first and then upload the code to the Mecanum. You can upload the firmware to Quarky with the following steps:
    1. Select the Quarky palette from the Block section.
    2. Select the Settings button on top of the palette.
    3. In the settings dialog box, scroll down and select the Upload Firmware option. This resets the Quarky, whether or not any previous code was uploaded.
  4. After the firmware is uploaded, click on the “Upload Code” option to upload the code.
  5. You will have to use the “When Quarky Starts Up” block rather than the conventional “When Green Flag Clicked” block for the code to run.

Output

Forward-Backward Motion:

Circular Right-Left Motion:

Lateral Right-Left Motion:

Gripper Mechanism:

Read More
Learn to control Mecanum using Dabble App on your device with customized functions for specialized motions using the Python Interface of the Pictoblox Software.

Introduction

In this activity, we will control the Mecanum Gripper according to our needs using the Dabble application on our own Devices.

We will first understand how to operate Dabble and how to modify our code according to the requirements. The following image is the front page of the Dabble Application.

Select the Gamepad option from the Home Screen and we will then use the same gamepad to control our Mecanum Gripper.

Code

The following blocks represent the different functions created to control the Mecanum Gripper for different types of motion. We will use the arrow buttons to control the basic movements (forward, backward, lateral left, lateral right) and custom functions to control the gripper actions: the Triangle button closes the gripper arms, the Circle button opens them, the Cross button rotates the robot to the right, and the Square button rotates it to the left. The Select button stops the Mecanum whenever needed.

Note: You can always customize each and every function and button to make your own activities easily. You will have to add the Mecanum and Dabble extensions to access the functions. To access the basic extensions required, make sure to select the Board as Quarky first. Select the Python Coding Environment and, at the top right, click on Upload Mode for the code to work properly.

from quarky import *
# imported modules
from expansion_addon import Mecanum
import dabble

# User Defined Functions
def Initialization():
    meca.initialisegripper(5)
    meca.setcloseangle(90)
    meca.stoprobot()


meca = Mecanum(1, 2, 7, 8)
gp = dabble.Gamepad()


Initialization()
while True:
    gp.processinput()
    if gp.ispressed("up"):
        meca.runrobot("forward", 100)
    elif gp.ispressed("down"):
        meca.runrobot("backward", 100)
    elif gp.ispressed("right"):
        meca.runrobot("lateral right", 100)
    elif gp.ispressed("left"):
        meca.runrobot("lateral left", 100)
    elif gp.ispressed("triangle"):
        meca.closearm()
    elif gp.ispressed("circle"):
        meca.openarm()
    elif gp.ispressed("cross"):
        meca.runrobot("circular right", 100, 1)
    elif gp.ispressed("square"):
        meca.runrobot("circular left", 100, 1)
    elif gp.ispressed("select"):
        meca.stoprobot()
    else:
        meca.stoprobot()

You will have to connect the Quarky with the Dabble Application on your device. Make sure Bluetooth is enabled on the device before connecting. Connect the Mecanum to the Dabble application after uploading the code. You will be able to connect by clicking on the plug option in the Dabble Application as seen below. Select that plug option and you will find your Quarky device. Connect by clicking on the respective Quarky.

Important Notes

  1. The code will only run after it is uploaded, with the Mecanum connected to the laptop using a C-type cable.
  2. You will be able to upload the Python code by selecting the Upload option beside the Stage option.
  3. In some cases you may have to upload the firmware first and then upload the code to the Mecanum. You can upload the firmware to Quarky with the following steps:
    1. Go to the Block Coding Environment and select the Quarky palette from the Block section.
    2. Select the Settings button on top of the palette.
    3. In the settings dialog box, scroll down and select the Upload Firmware option. This resets the Quarky, whether or not any previous code was uploaded.
  4. After the firmware is uploaded, you can shift back to Python mode and upload the code you have written. The upload button is in the right section of the terminal, as shown below.

Output

Forward-Backward Motion:

Circular Right-Left Motion:

Lateral Right-Left Motion:

Gripper Mechanism:

Read More
Learn how to set the bounding box threshold, and detect signals such as 'Go', 'TurnRight', 'TurnLeft', and 'Stop' to control quadruped movements.

Introduction

Sign detection is being performed using a camera and a RecognitionCards object. The RecognitionCards object is set up with a threshold value and is enabled to draw a box around the detected object. The robot uses sensors, cameras, and machine learning algorithms to detect and understand the sign, and then performs a corresponding action based on the signal detected.

These robots are often used in manufacturing, healthcare, and customer service industries to assist with tasks that require human-like interaction and decision-making.

Code

sprite = Sprite('Tobi')
quarky = Quarky()

import time
quad=Quadruped(4,1,8,5,3,2,7,6)

recocards = RecognitionCards()
recocards.video("on flipped")
recocards.enablebox()
recocards.setthreshold(0.6)
quad.home()
while True:
  recocards.analysecamera()
  sign = recocards.classname()
  sprite.say(sign + ' detected')
  
  if recocards.count() > 0:
    if 'Go' in sign:
      quarky.drawpattern("jjjijjjjjiiijjjiiiiijjjjijjjjjjijjj")
      quad.move("forward",1000,1)

    elif 'Turn Left' in sign:
      quarky.drawpattern("jjjddjjjjjdddjdddddddjjjdddjjjjddjj")
      quad.move("lateral right",1000,1)

    elif 'Turn Right' in sign:
      quarky.drawpattern("jjggjjjjgggjjjgggggggjgggjjjjjggjjj")
      quad.move("lateral left",1000,1)

    elif 'U Turn' in sign:
      quarky.drawpattern("jjjbjjjjjjbjjjjbbbbbjjjbbbjjjjjbjjj")
      quad.move("backward",1000,1)

    else:
      quad.home()

Logic

  1. This code is using several objects to detect and respond to certain signs or images captured by a camera.
  2. First, it creates a Sprite object with the name ‘Tobi’, and a Quarky object. It also imports a time module.
  3. Next, a Quadruped object is created with some parameters. Then, a RecognitionCards object is created to analyze the camera input. The object is set to enable a box around the detected object and to set the threshold of detection to 0.6.
  4. The code then puts the Quadruped object in its home position and enters an infinite loop.
  5. Within the loop, the code captures the camera input and uses the RecognitionCards object to analyze it. If an object is detected, the object’s class name is retrieved and used by the Sprite object to say that the object was detected.
  6. If the count of detected objects is greater than zero, the code checks if the detected object is a specific sign.
  7. If the object is a ‘Go‘ sign, the Quarky object will draw a specific pattern, and the Quadruped object will move forward.
  8. If the object is a ‘Turn Left‘ sign, the Quarky object will draw a different pattern and the Quadruped object will move to the right.
  9. If the object is a ‘Turn Right‘ sign, the Quarky object will draw another pattern, and the Quadruped object will move to the left.
  10. Finally, if the object is a ‘U Turn‘ sign, the Quarky object will draw a fourth pattern, and the Quadruped object will move backward.
  11. If the detected object is not any of the specific signs, the Quadruped object will return to its home position.
  12. So, this code helps a robot understand hand signs and move in response to them!

Output

Read More
Learn about face-tracking, and how to code a face-tracking Quadruped robot using sensors and computer vision techniques.

Introduction

A face-tracking robot is a type of robot that uses sensors and algorithms to detect and track human faces in real time. The robot’s sensors, such as cameras or infrared sensors, capture images or videos of the surrounding environment and use computer vision techniques to analyze the data and identify human faces. One of the most fascinating activities is face tracking, in which the Quadruped can detect a face and move its head in the same direction as yours. How intriguing it sounds, so let’s get started with the coding for a face-tracking Quadruped robot.

We will learn how to use face detection to control the movement of a Quadruped robot and how to incorporate external inputs into a program to create more interactive and responsive robotics applications.

Logic

  1. If the face is tracked at the center of the stage, the Quadruped should be straight.
  2. As the face moves to the left side, the Quadruped will also move to the left side.
  3. As the face moves to the right side, the Quadruped will also move to the right side.

Code

sprite = Sprite('Tobi')
quarky=Quarky()

import time
import math
quad=Quadruped(4,1,8,5,3,2,7,6)

fd = FaceDetection()
fd.video("on", 0)
fd.enablebox()
fd.setthreshold(0.5)
time.sleep(1)
Angle=0
while True:
  fd.analysestage()
  for i in range(fd.count()):
    sprite.setx(fd.x(i + 1))
    sprite.sety(fd.y(i + 1))
    sprite.setsize(fd.width(i + 1))
    Angle=fd.width(i + 1)
    angle=int(float(Angle))
    if angle>90:
      quad.move("lateral right",1000,1)
    elif angle<90:
      quad.move("lateral left",1000,1)
    else:
      quad.home()

Code Explanation

  1. First, we import the time and math libraries and create the required objects.
  2. Next, we set up the camera and enable face detection with a 0.5 threshold.
  3. We then use a loop to continuously analyze the camera feed for faces and control the Quadruped’s movement.
  4. When a face is detected, the sprite moves to the face’s location, and the detected face’s width (used as an angle) determines the direction of movement.
  5. The Quadruped moves to the left if the angle is greater than 90 degrees.
  6. The Quadruped moves to the right if the angle is less than 90 degrees.
  7. If the angle is exactly 90 degrees, the Quadruped returns to its original position.

Output

Our next step is to check whether it is working correctly or not. Whenever your face comes in front of the camera, it should be detected, and as you move to the right or left, your Quadruped robot should move accordingly.

Read More
The example shows how to use a number classifier in PictoBlox to make the Iris Classifier Bot.

Script

The idea is simple: we’ll add one image of each class in the “costume” column by making one new sprite, which we will display on the stage according to the user’s input. We’ll also change the name of each image according to the iris type.

  1. Add an iris image as another sprite and upload one image of each iris class as a costume.
  2. Now, come back to the coding tab and select the Tobi sprite.
  3. We’ll start by adding a when flag clicked block from the Events palette.
  4. Add an “ask () and wait” block from the Sensing palette. Write an appropriate statement in an empty place.
  5. Add the “set () as ()” block from the Machine Learning palette. Select the SepalLengthCm option at the first empty place, and for the second select an “answer” block from the Sensing palette.
  6. Add an “ask () and wait” block from the Sensing palette. Write an appropriate statement in an empty place.
  7. Add the “set () as ()” block from the Machine Learning palette. Select the SepalWidthCm option at the first empty place, and for the second select an “answer” block from the Sensing palette.
  8. Add an “ask () and wait” block from the Sensing palette. Write an appropriate statement in an empty place.
  9. Add the “set () as ()” block from the Machine Learning palette. Select the PetalLengthCm option at the first empty place, and for the second select an “answer” block from the Sensing palette.
  10. Add an “ask () and wait” block from the Sensing palette. Write an appropriate statement in an empty place.
  11. Add the “set () as ()” block from the Machine Learning palette. Select the PetalWidthCm option at the first empty place, and for the second select an “answer” block from the Sensing palette.
  12. Add an “analyse numbers” block from the Machine Learning palette.
  13. Add the “if () then” block from the control palette for checking the user’s input.
  14. In the empty place of the “if () then” block, add an “is identified class ()” block from the Machine Learning palette. Select the appropriate class from the options.
  15. Inside the “if () then” block, add a “say ()” block from the Looks palette. Write an appropriate statement in the empty place.
  16. Inside the “if () then” block, add a “broadcast ()” block from the Events palette. Select the “New message” option and write an appropriate message for broadcasting to the other sprite.
  17. Repeat the “if () then” block code for the other iris classes, make the appropriate changes in the copied block code according to each class, and add the code just below it.
  18. The final code of the “Tobi” sprite is:
  19. Now click on the other sprite and write its code.
  20. We’ll start writing code for this sprite by adding a “when flag clicked” block from the Events palette.
  21. Add the “hide” block from the Looks palette.
  22. Write a new script in the same sprite for each class: add the “when I receive ()” block from the Events palette and select the appropriate class from the options.
  23. Add the “show” block from the Looks palette.
  24. Add the “switch costume to ()” block from the Looks palette. Select the appropriate class from the options.
  25. Repeat the same code for the other classes, making changes according to each class.

Final Result

Read More
Discover the capabilities of pick-and-place robotic arms, mechanical systems designed to efficiently pick up objects from one location and precisely place them in another.

Introduction

A pick-and-place robotic arm is a mechanical system designed to perform the task of picking up objects from one location and placing them in another. It consists of multiple segments connected, similar to a human arm, and is equipped with motors, sensors, and grippers.

The robotic arm is programmed to move in a precise and controlled manner. Various input methods, such as a computer interface or remote control can guide it. The arm uses its grippers to grasp objects securely, and then it can move them to a different location.

Pick-and-place robotic arms are commonly used in industries such as manufacturing, logistics, and assembly lines. They automate repetitive tasks that involve moving objects, saving time and reducing the risk of human error. With accuracy and efficiency, these robotic arms can handle a wide range of objects, from small components to larger items.

Code

sprite = Sprite('Tobi')
roboticArm = RoboticArm(1,2,3,4)

roboticArm.calibrate(0, 0, 0)
roboticArm.setoffset(0,0)
roboticArm.setgripperangle(0,50)
roboticArm.sethome()
roboticArm.gripperaction("open")
roboticArm.movetoxyz(100,200,25,1000)
roboticArm.gripperaction("close")
roboticArm.movetoxyz(80,200,70,1000)
roboticArm.movetoxyz(-100,200,70,1000)
roboticArm.gotoinoneaxis(25,"Z",1000)
roboticArm.gripperaction("open")
roboticArm.movetoxyz(-100,200,100,1000)
roboticArm.gripperaction("close")
roboticArm.sethome()

Logic

  1. Open the Pictoblox application.
  2. Select the block-based environment.
  3. Click on the robotic arm extension available in the left corner.
  4. A robotic arm object named roboticArm is created using the RoboticArm class, and four numbers (1, 2, 3, and 4) are passed as arguments. These numbers represent the initial settings or parameters of the robotic arm.
  5. The calibrate method of the roboticArm object is called with three arguments: 0, 0, and 0. This method calibrates the robotic arm using calibrate(0, 0, 0).
  6. The setoffset method of the roboticArm object is called with two arguments: 0 and 0. This method sets an offset value for the robotic arm using setoffset(0,0).
  7. The setgripperangle method of the roboticArm object is called with two arguments: 0 and 50. This method sets the angle of the gripper on the robotic arm using setgripperangle(0,50).
  8. The sethome method of the roboticArm object is called. This method sets the current position of the robotic arm as the home position using sethome().
  9. The gripperaction method of the roboticArm object is called with the argument “open“. This controls the gripper on the robotic arm to open using gripperaction(“open”), so the gripper of the arm opens.
  10. The movetoxyz method of the roboticArm object is called with four arguments: 100, 200, 25, and 1000. This method moves the robotic arm to the specified x, y, and z coordinates over a duration of 1000 milliseconds using movetoxyz(100,200,25,1000).
  11. The gripperaction method of the roboticArm object is called with the argument “close“. This controls the gripper to close using gripperaction(“close”), which picks up the object from its place.
  12. The movetoxyz method of the roboticArm object is called with four arguments: 80, 200, 70, and 1000. This moves the robotic arm to a new set of coordinates using movetoxyz(80,200,70,1000).
  13. The movetoxyz method of the roboticArm object is called with four arguments: -100, 200, 70, and 1000. This moves the robotic arm to another set of coordinates using movetoxyz(-100,200,70,1000).
  14. The gotoinoneaxis method of the roboticArm object is called with three arguments: 25, “Z”, and 1000. This moves the robotic arm to a specific position along the Z-axis using gotoinoneaxis(25,“Z”,1000).
  15. The gripper is then opened with gripperaction(“open”), placing the object in its new position.
  16. The movetoxyz method of the roboticArm object is called with four arguments: -100, 200, 100, and 1000. This moves the robotic arm to another set of coordinates using movetoxyz(-100,200,100,1000).
  17. The gripperaction method of the roboticArm object is called with the argument “close“. This controls the gripper to close using gripperaction(“close”), closing the arm of the gripper.
  18. Press Run to run the code.

Output

Read More
This project features a Three IR Line Following Robot with Quarky, using external IR sensors and simple logic to follow a line.

Steps

  1. Connect the external IR modules’ digital pins to the Quarky.
  2. Set the IR threshold using the potentiometer on the module for line following and for stopping the robot at the crossing lines.
  3. The example below uses the line-following blocks without PID (a simple sketch of this logic is given after this list).
  4. When you click “do line following”, the robot starts line following and stops at the checkpoint (when both IRs are on the black line).
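A simple (non-PID) three-IR decision rule is sketched below; read_ir() and set_motor_speeds() are placeholder helpers standing in for the corresponding Quarky blocks.

# Sketch of plain three-IR line following: steer toward the sensor that sees the line.
def read_ir():
  # Placeholder: return (left, center, right) readings, 1 = black line detected.
  return 0, 1, 0

def set_motor_speeds(left_speed, right_speed):
  # Placeholder: drive the left and right motors at the given speeds.
  pass

while True:
  left, center, right = read_ir()
  if left and right:            # both outer sensors on black: checkpoint, stop
    set_motor_speeds(0, 0)
    break
  elif left:                    # line drifting to the left: turn left
    set_motor_speeds(0, 60)
  elif right:                   # line drifting to the right: turn right
    set_motor_speeds(60, 0)
  else:                         # line under the center sensor: go straight
    set_motor_speeds(60, 60)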

Script

Read More
In this example, we will demonstrate how to control the door of the IoT House.

In this example, we will demonstrate how to control the door of the IoT House.

Circuit

Connect the servo motor to the Quarky Expansion Board servo pin 5.

Door Control

The door of the IoT House is controlled with a servo motor. You need to set the servo motor to the 0-degree angle before assembling the door. You can do it with the following script.

Door Control Script

The following script keeps the door closed by default and opens it for 1 second when the space key is pressed.
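A Python-style sketch of the same behavior is shown below. The helpers are placeholders: set_door_servo() stands in for setting the expansion-board servo on pin 5 to an angle, space_key_pressed() for the space-key check, and 90 degrees is an assumed open position.

# Sketch of the door control: closed by default, open for 1 second on space key.
import time

def set_door_servo(angle):
  # Placeholder: set the expansion-board servo on pin 5 to the given angle.
  pass

def space_key_pressed():
  # Placeholder: return True while the space key is pressed.
  return False

set_door_servo(0)               # door closed by default
while True:
  if space_key_pressed():
    set_door_servo(90)          # open the door (assumed open angle)
    time.sleep(1)               # keep it open for 1 second
    set_door_servo(0)           # close it again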

Output

Press the space key to make the door open.

Read More
Learn how to connect the MQ2 gas sensor to Adafruit IO and create an air pollution monitoring system. Understand the sensitivity characteristics of the MQ2 gas sensor and how to calculate the PPM of the gas.

The project demonstrates how to interface the gas sensor to the Quarky and get the PPM (Parts Per Million) reading. Later, we will create an air pollution monitoring system on Adafruit IO.

Adafruit IO – Creating Gas Monitoring Dashboard

We will be using Adafruit IO for creating a switch on the cloud. Follow the instructions:

  1. Create a new Feed named Gas Sensor.
  2. Create a new Dashboard named Sensor Monitoring.
  3. Edit the Dashboard and add a Gauge Block.
  4. Connect the Gas Sensor feed to the block and click on Next Step.
  5. Edit the Block Setting and click on Create Block.
  6. Block is added.

Circuit of Gas Sensor

The gas sensor module has the following connections:

  1. GND Pin connected to GND of the Quarky Expansion Board.
  2. VCC Pin connected to VCC of the Quarky Expansion Board.
  3. AO (Signal Pin) connected to Analog Pin A1 of the Quarky Expansion Board.

All About MQ2 Gas Sensor

Gas sensors are designed to measure the concentration of gases in the environment. The MQ2 gas sensor is suitable for detecting H2, LPG, CH4, CO, alcohol, smoke, and propane. Due to its high sensitivity and fast response time, measurements can be taken quickly.

Note:  The sensor value only reflects the approximate trend of gas concentration within a permissible error range; it DOES NOT represent the exact gas concentration. Detecting the exact concentration of specific components in the air usually requires a more precise and costly instrument, which cannot be done with a single gas sensor.

MQ-2 Gas Sensor Sensitivity Characteristics:

The graph tells us the concentration of a gas in parts per million (ppm) according to the resistance ratio of the sensor (RS/R0).

  1. RS is the resistance of the sensor that changes depending on the concentration of gas.
  2. R0 is the resistance of the sensor at a known concentration without the presence of other gases, or in the fresh air.

For air, RS/R0 = 9.8 for the MQ2 gas sensor.

Note:  According to the graph, we can see that the minimum concentration we can measure is 100 ppm and the maximum is 10000 ppm; in other words, we can measure a gas concentration between 0.01% and 1%.

Calculation of R0 for the Sensor

RS = [(Vin x RL) / Vout] - RL
  1. Vin is 5V in our case.
  2. RL is 10 kOhm
  3. Vout is the analog voltage reading from the sensor

Since we only ever use the ratio RS/R0, the load resistance RL cancels out, so we can drop it from the formula:

RS = (Vin - Vout) / Vout

From the graph, we can see that the resistance ratio in fresh air is constant:

RS / R0 = 9.8

To calculate R0 we will need to find the value of the RS in the fresh air using the above formula. This will be done by taking the analog average readings from the sensor and converting it to voltage. Then we will use the RS formula to find R0.

R0 = RS / 9.8

Calculating PPM for a particular gas

Let’s analyze the graph:

  1. The scale of the graph is log-log. This means that on a linear scale, the behavior of the gas concentration with respect to the resistance ratio is exponential.
  2. The data for gas concentration only ranges from 200 ppm to 10000 ppm.
  3. Even though the relation between resistance ratio and gas concentration may seem linear, in reality, it is not.

First of all, we will treat the lines as if they were linear. This way we can use one formula that linearly relates the ratio and the concentration. By doing so, we can find the concentration of a gas at any ratio value even outside of the graph’s boundaries. The formula we will be using is the equation for a line, but for a log-log scale. The formula for a line is:

  y = mx + b

Where:

y: Y value (the RS/R0 resistance ratio)
x: X value (the gas concentration in ppm)
m: Slope of the line
b: Y intercept

For a log-log scale, the formula looks like this:

  log(y) = m*log(x) + b
Note:  The log is base 10.

Okay, let’s find the slope. To do so, we need to choose 2 points from the graph.

In our case, we chose the points (200,1.6) and (10000,0.27) from the LPG line. The formula to calculate m is the following:

m = [log(y) - log(y0)] / [log(x) - log(x0)]

If we apply the logarithmic quotient rule we get the following:

m = log(y/y0) / log(x/x0)

Now we substitute the values for x, x0, y, and y0:

m = log(0.27/1.6) / log(10000/200)
m = -0.473

Now that we have m, we can calculate the y-intercept. To do so, we need to choose one point from the graph (once again from the LPG line). In our case, we chose (5000,0.46)

log(y) = m*log(x) + b
b = log(y) - m*log(x)
b = log(0.46) - (-0.473)*log(5000)
b = 1.413

Now that we have m and b, we can find the gas concentration for any ratio with the following formula:

log(x) = [log(y) - b] / m

However, in order to get the real value of the gas concentration according to the log-log plot we need to find the inverse log of x:

x = 10 ^ {[log(y) - b] / m}

In the table given below, you can find the value of m and b for different gases.

Code for Stage Mode

There are two steps to calculating PPM for the gas:

  1. First, we will calculate the value of R0. To calculate R0, we need to find the value of RS in fresh air. This is done by averaging analog readings from the sensor and converting them into the corresponding voltage value; we then use the formula above to calculate R0. Wait until you get a stable value of R0. Make this script in PictoBlox to get the value of R0.
  2. After that, we will use the above-calculated value of R0 to find out the concentration of gases in ppm and send it to the cloud.
#Importing the time and math modules to use later on in the code.
import time
import math

#Creating a Quarky object called 'quarky'.
quarky = Quarky()

#Creating an IoTHouse object called 'house' and an AdaIO object called 'adaio'.
house = IoTHouse()
adaio = AdaIO()

#Connecting the AdaIO object to Adafruit IO using a username and key.
adaio.connecttoadafruitio("STEMNerd", "aio_UZBB56f7VTIDWyIyHX1BCEO1kWEd")

#Initializing Sensor_Value to 0.1
Sensor_Value = 0.1

#Looping through 20 times to get the Sensor_Value
for i in range(0, 20):
  Sensor_Value += house.ldrvalue("A1")

#Getting the average of the Sensor_Value
Sensor_Value = (Sensor_Value / 20)

#Getting the RS_of_Air from the Sensor_Value
RS_of_Air = ((4095 - Sensor_Value) / Sensor_Value)

#Getting the R0 from the RS_of_Air
R0 = (RS_of_Air / 9.8)

#Making the program wait for 1 second
time.sleep(1)

#Initializing b to 1.413 and m to -0.473
b = 1.413
m = -0.473

#A loop that will run forever
while True:
  #Getting the Sensor_Value from the house
  Sensor_Value = house.ldrvalue("A1")

  #Making sure that Sensor_Value is not equal to 0
  if Sensor_Value != 0:
    #Getting the RS_of_Air from the Sensor_Value
    RS_of_Air = ((4095 - Sensor_Value) / Sensor_Value)

    #Getting the RS_RO_Ratio from the RS_of_Air and R0
    RS_RO_Ratio = (RS_of_Air / R0)

    #Getting the PPM_in_Log from the RS_RO_Ratio, b and m (base-10 log, as in the formula above)
    PPM_in_Log = (((math.log10(RS_RO_Ratio)) - b) / m)

    #Getting the PPM from the PPM_in_Log
    PPM = (pow(10, PPM_in_Log))

    #Creating data with the AdaIO object called 'gas-sensor'
    adaio.createdata("gas-sensor", PPM)

  #Making the program wait for 2 seconds
  time.sleep(2)
Note:  If you want to detect other gases, change the value of b and m in the program according to the sensor from the table.

Output

Read More
Learn how to create custom sounds to control Mars Rover with the Audio Classifier of the Machine Learning Environment in PictoBlox. Start building your Sound Based Controlled Mars Rover now!

In this activity, we will use the Machine Learning Environment of the Pictoblox Software. We will use the Audio Classifier of the Machine Learning Environment and create our custom sounds to control the Mars Rover.

Audio Classifier Workflow

Follow the steps below to create your own Audio Classifier Model:

  1. Open PictoBlox and create a new file.
  2. Select the Block coding environment as the appropriate Coding Environment.
  3. Select the “Open ML Environment” option under the “Files” tab to access the ML Environment.
  4. Click on “Create New Project“.
  5. A new window will open. Type in an appropriate project name of your choice and select the “Audio Classifier” extension. Click the “Create Project” button to open the Audio Classifier Window.
  6. You shall see the Classifier workflow with two classes already made for you. Your environment is all set. Now it’s time to upload the data.
  7. As you can observe in the above image, we will add two classes for audio. We will add audio samples with the help of the microphone. Rename class1 as “Clap” and class2 as “Snap”.

Note: You can add more classes to the projects using the Add Class button.

Adding Data to Class

You can perform the following operations to manipulate the data into a class.

  1. Naming the Class: You can rename the class by clicking on the edit button.
  2. Adding Data to the Class: You can add the data using the Microphone.
  3. You will be able to add audio samples to each class; make sure you add at least 20 samples for the model to run with good accuracy.
  4. Add the first class as “clap”  and record the audio for clap noises through the microphone.
  5. Add the second class as “snap” and record the audio for snap noises through the microphone.

Note: You can only change the class name at the start, before adding any audio samples. You will not be able to change the class name after adding audio samples to the respective class.

Training the Model

After data is added, it’s fit to be used in model training. In order to do this, we have to train the model. By training the model, we extract meaningful information from the audio samples, and that in turn updates the weights. Once these weights are saved, we can use our model to make predictions on data previously unseen.

The accuracy of the model should increase over time. The x-axis of the graph shows the epochs, and the y-axis represents the accuracy at the corresponding epoch. Remember, the higher the reading in the accuracy graph, the better the model. The range of the accuracy is 0 to 1.

Testing the Model

To test the model, simply use the microphone directly and check the classes as shown in the image below:

You will be able to test the difference in audio samples recorded from the microphone as shown below:

Export in Block Coding

Click on the “Export Model” button on the top right of the Testing box, and PictoBlox will load your model into the Block Coding Environment if you have opened the ML Environment in the Block Coding.

 

Logic

The Mars Rover will move according to the following logic:

  1. When the audio is identified as “clap”- Mars Rover will move forward.
  2. When the “snap” sound is detected –Mars Rover will move backward.

Note: You can add even more classes with different types of differentiating sounds to customize your control. This is just a small example from which you can build your own Sound Based Controlled Mars Rover in a very easy stepwise procedure.

 

Code

 

Logic

  1. First, we will initialize the different audio classes.
  2. Then we will open the recognition window, which will identify the different audio classes, and turn on the microphone to record and identify the incoming audio.
  3. If the identified class from the analyzed audio is “clap,” the Mars Rover will move forward at a specific speed.
  4. If the identified class is “snap,” the Mars Rover will move backward.

Output

Read More
Learn about how hand gestures and motions can be translated into commands that control the movement of objects.

Introduction

The hand-controlled motion refers to the ability to control the movement of an object using hand gestures or motions. This can be accomplished through the use of various technologies, such as sensors or motion tracking devices, that detect the movements of the hand and translate them into commands that control the motion of the object.

Hand-controlled motion has a wide range of applications, including in virtual reality and gaming, robotics, prosthetics, and assistive technologies for individuals with disabilities. By allowing for intuitive and natural control of motion, hand-controlled motion can enhance the user’s experience and increase their ability to interact with and manipulate the world around them.

Code

Logic

  1. Begin by initializing the Humanoid extension.
  2. Set specific values for the speed, left-hand offset, left-hand amplifier, period, and phase variable using set() to ().
  3. Then use a forever loop to continuously execute the necessary tasks.
  4. Furthermore, utilize the repeat until loop to repeat the tasks until a specific period has passed.
  5. Calculate a specific angle to set the current position, then position the right hand accordingly to start oscillating from that angle using the set() to () block.
  6. Then apply similar mathematical calculations and set the left hand to the same angle.
  7. Finally, both hands move according to these calculations inside the forever loop (a rough sketch of this math is given after this list).
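The oscillation the blocks compute can be written as a simple sine function of time. This is only a sketch of the math: set_hand_angle() is a placeholder for the Humanoid block that positions each hand servo, and the offset, amplitude, period, and phase values are illustrative.

# Sketch of the hand-oscillation math described above.
import math
import time

offset = 90        # centre position of the hand servo, in degrees (illustrative)
amplitude = 30     # how far each hand swings (illustrative)
period = 2.0       # seconds per full oscillation
phase = 0.0        # phase shift applied to the motion

def set_hand_angle(hand, angle):
  # Placeholder: move the given hand servo to the given angle.
  print(hand, round(angle))

start = time.time()
while True:
  t = time.time() - start
  angle = offset + amplitude * math.sin(2 * math.pi * t / period + phase)
  set_hand_angle("right", angle)
  set_hand_angle("left", angle)
  time.sleep(0.05)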

Output

Read More
In this activity, learn how to create a new Machine Learning model that will be able to identify and detect different types of hand poses and that can help us to control the Mecanum Pick and Place Robot.

In this activity, we will try to create a new Machine Learning model that will be able to identify and detect different types of hand poses and that can help us to control the Mecanum Pick and Place Robot. This activity can be quite fun and by knowing the process, you can develop your own customized hand pose classifier model easily!

We will use the same model that we created in the previous Hand Pose Controlled Mecanum example to avoid any misdirection and confusion.

Note: You can always create your own model and use it to perform any functions of your choice. This example makes the same point and helps you understand the concept of Machine Learning models and the ML Environment.

Hand Gesture Classifier Workflow

Follow the steps below:

  1. Open PictoBlox and create a new file.
  2. Select the Block coding environment as the appropriate Coding Environment.
  3. Select the “Open ML Environment” option under the “Files” tab to access the ML Environment.
  4. Click on “Create New Project“.
  5. A window will open. Type in a project name of your choice and select the “Hand Gesture Classifier” extension. Click the “Create Project” button to open the Hand Pose Classifier window.
  6. You shall see the Classifier workflow with two classes already made for you. Your environment is all set. Now it’s time to upload the data.

Class in Hand Gesture Classifier

There are 2 things that you have to provide in a class:

  1. Class Name: The name to which the class will be referred.
  2. Hand Pose Data: This data can either be taken from the webcam or uploaded from local storage.

Note: You can add more classes to the projects using the Add Class button.
Adding Data to Class

You can perform the following operations to manipulate the data into a class.

  1. Naming the Class: You can rename the class by clicking on the edit button.
  2. Adding Data to the Class: You can add the data using the Webcam or by Uploading the files from the local folder.
    1. Webcam:
Note: You must add at least 20 samples to each of your classes for your model to train. More samples will lead to better results.
Training the Model

After data is added, it’s fit to be used in model training. In order to do this, we have to train the model. By training the model, we extract meaningful information from the hand pose, and that in turn updates the weights. Once these weights are saved, we can use our model to make predictions on data previously unseen.

The accuracy of the model should increase over time. The x-axis of the graph shows the epochs, and the y-axis represents the accuracy at the corresponding epoch. Remember, the higher the reading in the accuracy graph, the better the model. The range of the accuracy is 0 to 1.

Testing the Model

To test the model, simply enter the input values in the “Testing” panel and click on the “Predict” button.

The model will return the probability of the input belonging to the classes.

Export in Block Coding

Click on the “Export Model” button on the top right of the Testing box, and PictoBlox will load your model into the Block Coding Environment if you have opened the ML Environment in the Block Coding.

Logic

The mecanum will move according to the following logic:

  1. When the Forward gesture is detected – Mecanum will move forward.
  2. When the Backward gesture is detected – Mecanum will move backward.
  3. When the Lateral Left gesture is detected – Mecanum will move towards the left direction laterally with the help of its omnidirectional wheels.
  4. When the Lateral Right gesture is detected – Mecanum will move towards the right direction laterally with the help of its omnidirectional wheels.
  5. When the Normal Right gesture is detected – Mecanum will rotate on a single point towards the right direction.
  6. When the Normal Left gesture is detected – Mecanum will rotate on a single point towards the left direction.
  7. When the Stop gesture is detected – Mecanum will stop and initiate the Pick Mechanism using the Pick function.
  8. When the Circular Motion gesture is detected – Mecanum will initiate the Place Mechanism using the Place function.

Code

Initialization

Main Code

Output

Forward-Backward Motion:

Circular Right-Left Motion:

Lateral Right-Left Motion:

Pick and Place Mechanism with Hand Pose:

Read More
In this activity, learn how to create a new Machine Learning model that will be able to identify and detect different types of hand poses and that can help us to control the Mecanum Pick and Place Robot.

In this activity, we will try to create a new Machine Learning model that will be able to identify and detect different types of hand poses and that can help us to control the Mecanum Pick and Place Robot. This activity can be quite fun and by knowing the process, you can develop your own customized hand pose classifier model easily!

We will use the same model that we have created in the previous Hand Pose Controlled Mecanum model to avoid any misdirection and confusion.

Note: You can always create your own model and use it to perform any type of functions as per your choice. This example proves the same point and helps you understand well the concept of Machine Learning models and environment.

Hand Gesture Classifier Workflow

Follow the steps below:

  1. Open PictoBlox and create a new file.
  2. Select the Python coding environment as the appropriate Coding Environment.
  3. Select the “Open ML Environment” option under the “Files” tab to access the ML Environment.
  4. Click on “Create New Project“.
  5. A window will open. Type in a project name of your choice and select the “Hand Gesture Classifier” extension. Click the “Create Project” button to open the Hand Pose Classifier window.
  6. You shall see the Classifier workflow with two classes already made for you. Your environment is all set. Now it’s time to upload the data.

Class in Hand Gesture Classifier

There are 2 things that you have to provide in a class:

  1. Class Name: The name to which the class will be referred.
  2. Hand Pose Data: This data can either be taken from the webcam or uploaded from local storage.

Note: You can add more classes to the projects using the Add Class button.

Adding Data to Class

You can perform the following operations to manipulate the data into a class.

  1. Naming the Class: You can rename the class by clicking on the edit button.
  2. Adding Data to the Class: You can add the data using the Webcam or by Uploading the files from the local folder.
    1. Webcam:
Note: You must add at least 20 samples to each of your classes for your model to train. More samples will lead to better results.

Training the Model

After data is added, it’s fit to be used in model training. In order to do this, we have to train the model. By training the model, we extract meaningful information from the hand pose, and that in turn updates the weights. Once these weights are saved, we can use our model to make predictions on data previously unseen.

The accuracy of the model should increase over time. The x-axis of the graph shows the epochs, and the y-axis represents the accuracy at the corresponding epoch. Remember, the higher the reading in the accuracy graph, the better the model. The range of the accuracy is 0 to 1.

Testing the Model

To test the model, simply enter the input values in the “Testing” panel and click on the “Predict” button.

The model will return the probability of the input belonging to the classes.

Export in Python Coding

Click on the “Export Model” button on the top right of the Testing box, and PictoBlox will load your model into the Python Coding Environment if you have opened the ML Environment in the Python Coding.

Logic

The mecanum will move according to the following logic:

  1. When the Forward gesture is detected – Mecanum will move forward.
  2. When the Backward gesture is detected – Mecanum will move backward.
  3. When the Lateral Left gesture is detected – Mecanum will move towards the left direction laterally with the help of its omnidirectional wheels.
  4. When the Lateral Right gesture is detected – Mecanum will move towards the right direction laterally with the help of its omnidirectional wheels.
  5. When the Normal Right gesture is detected – Mecanum will rotate on a single point towards the right direction.
  6. When the Normal Left gesture is detected – Mecanum will rotate on a single point towards the left direction.
  7. When the Stop gesture is detected – Mecanum will stop and initiate the Pick Mechanism using the Pick function.
  8. When the Circular Motion gesture is detected – Mecanum will initiate the Place Mechanism using the Place function.

Code

Logical Code

meca=Mecanum(1,2,7,8)
meca.initialisepickplace(5,4)
meca.setpickangle(40)
meca.setarmangle(90)

def runmecanum(predicted_class):
  if pose.ishanddetected():
    if predicted_class=="Forward":
      meca.runtimedrobot("forward",100,2)
    if predicted_class=="Backward":
      meca.runtimedrobot("backward",100,2)
    if predicted_class=="Stop":
      meca.pick()
    if predicted_class=="LateralRight":
      meca.runtimedrobot("lateral right",100,2)
    if predicted_class=="LateralLeft":
      meca.runtimedrobot("lateral left",100,2)
    if predicted_class=="NormalRight":
      meca.runtimedrobot("circular right",100,1)
    if predicted_class=="NormalLeft":
      meca.runtimedrobot("circular left",100,1)
    if predicted_class=="CircularMotion":
      meca.place()

Final Code

  1. We will create a custom function, as shown above, that controls the Pick and Place Mecanum robot with the help of the Machine Learning model we created.
  2. We will also initialize the Mecanum pins and the Pick and Place servo pins before the main loop, and set the servo angles for picking and placing with the “setpickangle()” and “setarmangle()” functions.
  3. Finally, we call this custom function at the end of the main loop so that the robot responds to each predicted class.

####################imports####################
# Do not change

import numpy as np
import tensorflow as tf
import time

# Do not change
####################imports####################

#Following are the model and video capture configurations
# Do not change

model=tf.keras.models.load_model(
    "num_model.h5",
    custom_objects=None,
    compile=True,
    options=None)
pose = Posenet()                                                    # Initializing Posenet
pose.enablebox()                                                    # Enabling video capture box
pose.video("on",0)                                                  # Taking video input
class_list=['Forward','Backward','Stop','LateralRight','LateralLeft','NormalRight','NormalLeft','CircularMotion']                  # List of all the classes
meca=Mecanum(1,2,7,8)
meca.initialisepickplace(5,4)
meca.setpickangle(40)
meca.setarmangle(90)

def runmecanum(predicted_class):
  if pose.ishanddetected():
    if predicted_class=="Forward":
      meca.runtimedrobot("forward",100,2)
    if predicted_class=="Backward":
      meca.runtimedrobot("backward",100,2)
    if predicted_class=="Stop":
      meca.pick()
    if predicted_class=="LateralRight":
      meca.runtimedrobot("lateral right",100,2)
    if predicted_class=="LateralLeft":
      meca.runtimedrobot("lateral left",100,2)
    if predicted_class=="NormalRight":
      meca.runtimedrobot("circular right",100,1)
    if predicted_class=="NormalLeft":
      meca.runtimedrobot("circular left",100,1)
    if predicted_class=="CircularMotion":
      meca.place()

# Do not change
###############################################

#This is the while loop block, computations happen here
# Do not change

while True:
  pose.analysehand()                                             # Using Posenet to analyse hand pose
  coordinate_xy=[]

  # for loop to iterate through 21 points of recognition
  for i in range(21):
    if(pose.gethandposition(1,i,0)!="NULL"  or pose.gethandposition(2,i,0)!="NULL"):
      coordinate_xy.append(int(240+float(pose.gethandposition(1,i,0))))
      coordinate_xy.append(int(180-float(pose.gethandposition(2,i,0))))
    else:
      coordinate_xy.append(0)
      coordinate_xy.append(0)
            
  coordinate_xy_tensor = tf.expand_dims(coordinate_xy, 0)        # Expanding the dimension of the coordinate list
  predict=model.predict(coordinate_xy_tensor)                    # Making an initial prediction using the model
  predict_index=np.argmax(predict[0], axis=0)                    # Generating index out of the prediction
  predicted_class=class_list[predict_index]                      # Tallying the index with class list
  print(predicted_class)
  runmecanum(predicted_class)
  # Do not change

Output

Forward-Backward Motion:

Circular Right-Left Motion:

Lateral Right-Left Motion:

Pick and Place Mechanism with Hand Pose:

Read More
The example shows how to use a number classifier in PictoBlox to make the Iris Classifier Bot.

Code

####################imports####################

import numpy as np
import tensorflow as tf
sprite=Sprite('Tobi')
sprite1 = Sprite('Iris-versicolor')


sprite1.hide()

####################imports####################

model= tf.keras.models.load_model(
    "num_model.h5", 
    custom_objects=None, 
    compile=True, 
    options=None)
SepalLengthCm = float(sprite.input("Enter Sepal Length"))
SepalWidthCm = float(sprite.input("Enter Sepal Width"))
PetalLengthCm = float(sprite.input("Enter Petal Length"))
PetalWidthCm = float(sprite.input("Enter Petal Width"))

class_list = ['Iris-versicolor','Iris-virginica','Iris-setosa',]                                     # List of all the classes

inputValue=[SepalLengthCm,SepalWidthCm,PetalLengthCm,PetalWidthCm,]       # Input List
inputTensor = tf.expand_dims(inputValue, 0)                 # Input Tensor

predict = model.predict(inputTensor)                        # Making an initial prediction using the model
predict_index = np.argmax(predict[0], axis=0)               # Generating index out of the prediction
predicted_class = class_list[predict_index]                 # Tallying the index with class list
sprite.say(predicted_class)
sprite1.show()
sprite1.switchcostume(predicted_class)

Logic

The example demonstrates how to classify an iris flower from measurements entered by the user. Following are the key steps happening:

  1. Creates a sprite object named “Tobi”. A sprite is typically a graphical element that can be animated or displayed on a screen.
  2. Creates another sprite by uploading an iris image from the computer; add images of the other iris types as additional costumes of this sprite.
  3. Initialize the new sprite object in the “Tobi” script and hide it initially.
    sprite1 = Sprite('Iris-versicolor') 
    sprite1.hide()
  4. Write code to take the sepal and petal length and width as input from the user and store each value in its own variable.
    SepalLengthCm = float(sprite.input("Enter Sepal Length"))
    SepalWidthCm = float(sprite.input("Enter Sepal Width"))
    PetalLengthCm = float(sprite.input("Enter Petal Length"))
    PetalWidthCm = float(sprite.input("Enter Petal Width"))
  5. Pass these inputs to the trained model to get the predicted class (as in the full code above), and use the predefined sprite.say() function so that ‘Tobi’ says its name.
  6. Now show the hidden sprite by calling a predefined function.
    sprite1.show()
  7. Also write a predefined function by which images will switch according to predicted class.
    sprite1.switchcostume(predicted_class)

Final Result

Read More
Discover the capabilities of a sign detector robotic arm, a smart robot that can recognize and understand signs or signals in its environment.

Introduction

A sign detector robotic arm is a smart robot that can recognize and understand signs or signals in its surroundings. It uses cameras and other sensors to capture visual information and computer algorithms to analyze the signs. The robot can learn different types of signs through machine learning techniques. Once a sign is identified, the robotic arm can perform specific actions based on what the sign means. These robotic arms have many uses, such as helping in healthcare, manufacturing, transportation, and assisting people with communication disabilities. They are an exciting advancement in human-robot interaction, allowing robots to understand and respond to signs, expanding their abilities and applications.

Code

Logic

  1. Open the Pictoblox application.
  2. Select the block-based environment.
  3. Click on the Recognition Cards and robotic arm extension available in the left corner.
  4. Initialize the video on stage and set the transparency as 0%.
  5. Drag and drop the forever loop to continuously take the image from the stage and get the input from the camera.
  6. Show the bounding box around the sign detected from the stage.
  7. If the sign is detected as ‘TurnRight’, the arm will turn 10 degrees to the right along the x-axis using the move () in () axis in () ms block.
  8. If the sign is detected as ‘TurnLeft’, the arm will turn 10 degrees to the left along the x-axis using the move () in () axis in () ms block.
  9. If the sign is detected as ‘Go’, the arm will move 10 degrees forward along the y-axis using the move () in () axis in () ms block.
  10. If the sign is detected as ‘U Turn’, the arm will move 10 degrees back along the y-axis using the move () in () axis in () ms block.
  11. If the sign is detected as ‘Stop’, the arm will return to its home() position.
  12. Press Green Flag to run the code.

Output

Read More
Delve into the world of 7 segment displays, an arrangement of seven LEDs forming the shape of the number eight

7 Segment Display

The 7 segment display is a compact arrangement of seven LEDs, creatively forming the shape of the number eight. Often, the display contains an eighth LED with a dot, functioning as a decimal point. Each LED can be controlled individually, enabling the formation of any desired number. By understanding the labeling of LEDs (A to G) and the dot LED (DP), we gain full control over this display module.


There are two types of 7-segment displays available: common cathode and common anode.
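To make the A–G labelling concrete, the sketch below lists the standard segment map for a common-cathode display, i.e. which segments light up for each digit. It is only an illustration and not the PictoBlox display block itself.

# Standard 7 segment map: which of the segments A-G are lit for each digit
# (A = top, B = top right, C = bottom right, D = bottom, E = bottom left,
#  F = top left, G = middle).
SEGMENTS = {
    0: "ABCDEF",
    1: "BC",
    2: "ABDEG",
    3: "ABCDG",
    4: "BCFG",
    5: "ACDFG",
    6: "ACDEFG",
    7: "ABC",
    8: "ABCDEFG",
    9: "ABCDFG",
}

for digit, segs in SEGMENTS.items():
    print(f"{digit}: light segments {', '.join(segs)}")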

Circuit Diagram

Code

 

  1. Change the mode from “stage” to “upload”.
  2. Add the “when Arduino starts up” block from the Arduino palette.
  3. Define the connection of the 7 segment display from the display palette.
  4. Add the display block along with the two previous blocks, and input any 4-digit number you want to show on the 7 segment display.
  5. The first input marked in blue represents the data or number you want to display (up to 4 digits in length). The second input defines the length of the number you’ve entered, and the third input marked in blue specifies the starting position of the number. This lets you choose which display will show the data, which is especially useful when multiple displays are connected. Additionally, you can decide whether to display colons, count leading zeros, and more.
  6. Now click on “Upload Code” to upload the code to the Arduino and test it.

Script

Output

Read More
Learn how to interface a joystick with Quarky, the versatile microcontroller, to control the movement of Quarky Robot.
Introduction

A joystick is an input device used to control the movement or actions of a computer, video game console, or other electronic device. In gaming, joysticks are often used to control the movement of characters or vehicles in a virtual environment. They provide analog input, meaning the degree of movement can vary based on how far the handle is pushed in a particular direction. In aviation and flight simulation, joysticks are commonly used to simulate the control of aircraft, providing pitch, roll, and yaw inputs. Some advanced joysticks also come with additional features such as throttle controls, programmable buttons, and force feedback to enhance the gaming or simulation experience. Below is a simple animation of a joystick.

 

 

In this example, we’ll interface a joystick with Quarky and read its values along the X and Y axes. Let’s begin!

Circuit Diagram

 

 

Connections

Joystick     Quarky
GND          GND
5V           V
VarX         A1
VarY         A2

 

Code

  1. Connect the joystick as per the above connections.
  2. Open PictoBlox and create a new file.
  3. Select Quarky from the board menu.
  4. From the events palette, drag the “when flag clicked” block into the scripting area.
  5. Now add a “forever” loop from the controls palette.
  6. Add a “say ()” block from the looks palette inside the loop.
  7. From the sensor palette of Quarky, add the “read analog sensor () at pin ()” block in place of “hello”, and select “joystick X” at pin A1 from the dropdown.

Now run the code. With this simple script, you will be able to see how the X-axis value changes as you push the joystick forward, backward, left, and right. With these values you can set the thresholds that decide each direction (forward, backward, etc.). Do the same to find the values for the Y-axis; a rough sketch of this thresholding is shown below.
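The sketch below shows the thresholding idea only. The 0–1023 range and the threshold values are assumptions you should replace with the readings you actually observe from the say () block, and read_joystick_x() is a hypothetical stand-in for the analog-read block, not the Quarky API.

# Sketch of the thresholding logic only; not the actual Quarky API.

def read_joystick_x():
    # Hypothetical stand-in for "read analog sensor (joystick X) at pin (A1)".
    return 512                     # assumed centre value of a 10-bit reading

def x_direction(value, low=300, high=700):
    # Turn a raw analog reading into a direction label.
    if value < low:
        return "left"
    if value > high:
        return "right"
    return "centre"

print(x_direction(read_joystick_x()))   # prints "centre" for the assumed value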

Task for you

Try to print the values of both axes together.

Output

Read More
Discover how the Quadruped robot can detect and respond to the presence of a hand in its environment.

The project demonstrates how to make the Quadruped detect the hand in front of it and move accordingly.

Type 1 – Forward Backward

The logic is simple: if the distance measured by the ultrasonic sensor is small, the robot will move toward the hand; otherwise, the robot will lean backward.

Code


Type 2 – Upside Down

If the distance measured by the ultrasonic sensor is small, the robot will face upward toward the hand; otherwise, the robot will look downward.

Code


Read More
The project uses face recognition to identify authorized people and opens the door accordingly.

The project uses face recognition to identify authorized people and opens the door accordingly.

Circuit

We are using 2 devices in this project:

  1. IR Sensor: The IR sensor provides information if there is an obstacle in front or not. The IR sensor connections are as follows:
    1. GND Pin connected to GND of the Quarky Expansion Board.
    2. VCC Pin connected to VCC of the Quarky Expansion Board.
    3. Signal Pin connected to D3 of the Quarky Expansion Board.
  2. Servo Motor: The servo motor controls the Door of the IoT house which is connected to the servo port of 5 of the Quarky Expansion Board.

Alert: Make sure you have the Door Servo Motor calibrated.

Face Recognition

We will be using Face Detection extension for making the face recognition application.

Working of an IR Sensor

An infrared (IR) sensor is a type of sensor that senses whether something is close to it or not. Infrared is light that lies outside our visible spectrum.

An IR sensor has an IR LED (transmitter, which looks white or clear) and a photodiode (receiver). The transmitter emits IR light, and the receiver detects the light reflected from objects within the sensor’s range, which can be adjusted with a potentiometer. The sensor has two LED indicators: a power LED, which is always on, and a signal LED, which is on when an object is detected and off when nothing is detected.

The signal LED has two states or situations:

  1. ON (Active) when it detects an object
  2. OFF (Inactive) when it doesn’t detect any object

Storing the Face Authorised for IoT House

This script allows us to add a new face to the system. First, the video feed from the camera is turned on. Then, the camera is analyzed for a face. If one face has been detected, the user is asked to select a slot (1 to 10) and enter a name for the new face which is then added to the system. Finally, the video feed from the camera is turned off.

Code

This code creates a program that can add a new face to the system, and then recognize and authenticate the user:

  1. The program sets the threshold for face detection to 0.5, turns off the video feed from the camera, and enables the box to be drawn around the detected face.
  2. It also moves a servo on the expansion board to position 5 and moves it to 100 degrees to close the door.
  3. It defines a custom block called Run Authorization Check.
  4. The Run Authorization Check block turns on the video feed from the camera, recognizes the face in the camera, and speaks out the name of the recognized user if the face has been recognized. It then returns 1 to indicate the user has been authenticated.
  5. The program then keeps running the loop forever. 
  6. It also checks if the IR sensor is active and, if yes, calls the Run Authorization Check function. If the user has been authenticated, it moves the servo to 0 degrees to open the door and then back to 100 degrees to close the door after some time (a rough Python sketch of this flow follows the list).
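The same flow can be written as a short Python sketch. The helper functions below are hypothetical stand-ins for the face-recognition, IR-sensor, and servo blocks described above; none of these names are the actual PictoBlox API.

import time

def ir_sensor_active():
    # Hypothetical stand-in for the IR sensor check on pin D3.
    return True

def run_authorization_check():
    # Hypothetical stand-in for the Run Authorization Check custom block:
    # turn the camera on, recognise the face and return 1 only when it
    # matches a stored, authorised person.
    recognised_name = "stored user"      # assumed result for the sketch
    print(f"Welcome, {recognised_name}")
    return 1

def move_door_servo(angle):
    # Hypothetical stand-in for moving servo 5 on the expansion board.
    print(f"door servo -> {angle} degrees")

move_door_servo(100)                     # door closed by default
while True:                              # keep checking forever
    if ir_sensor_active():               # someone is at the door
        if run_authorization_check() == 1:
            move_door_servo(0)           # open the door
            time.sleep(3)                # keep it open for some time (assumed)
            move_door_servo(100)         # close it again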

Output

Read More
Learn how to control the door of the IoT House with Python. Follow this step-by-step guide to control the servo motor and make the door open and close when you press the space key.

In this example, we will demonstrate how to control the door of the IoT House.

Circuit

Connect the servo motor to the Quarky Expansion Board servo pin 5.

Door Control

The door of the IoT House is controlled with a servo motor. You need to set the servo motor to an angle of 0 degrees before assembling the door. You can do it with the following code.

# Creating a Quarky object called "quarky"
quarky = Quarky()

# The "expansion" object is now set to the "Expansion" class
expansion = Expansion()

# We are using the "moveservo" method from the "Expansion" class to make the servo motor 5 be set at 0-degree 
expansion.moveservo(5, 0)

Door Control Python Code

The following script makes the door closed by default and opens it for 1 second when the space key is pressed.

import time

sprite = Sprite('Tobi') # create a sprite object called 'Tobi'
quarky = Quarky() # create a Quarky object
expansion = Expansion() # create an Expansion object

expansion.moveservo(5, 100)  # move the servo on pin 5 to position 100

while True:  # loop forever
    if sprite.iskeypressed("space"):  # if the spacebar is pressed
        expansion.moveservo(5, 0)  # move the servo on pin 5 to position 0
        time.sleep(1)  # wait for 1 second
        expansion.moveservo(5, 100)  # move the servo on pin 5 to position 100

Output

Press the space key to make the door open.

Read More
Explore the functionality of our obstacle avoidance robot equipped with an ultrasonic sensor. Discover how it intelligently detects obstacles.

Introduction

This project of obstacle avoidance is for a robot that will move around and look for obstacles. It uses an ultrasonic sensor to measure the distance. If the distance is less than 20 cm, it will stop and look in both directions to see if it can move forward. If it can, it will turn left or right. If not, it will make a U-turn. The robot will also light up an LED display to show where it is going.

Logic

This code is making a robot move around and explore its surroundings. It has an ultrasonic sensor that can measure the distance between objects.

  1. First, it checks if the distance measured by the sensor is less than 20 cm.
  2. If it is, it draws a stop sign pattern on the LED display and makes the robot stop and look straight. Then it looks left and checks if the distance is greater than 40 cm. If it is, it draws a left arrow pattern on the LED display and makes the robot turn left.
  3. If not, it looks right and checks if the distance is greater than 40 cm. If it is, it draws a right arrow pattern on the LED display and makes the robot turn right.
  4. If not, it draws a U arrow pattern on the LED display and makes the robot make a U-turn.
  5. If the distance measured by the ultrasonic sensor is not less than 20 cm, the code will make the robot move forward (the full decision logic is sketched below).
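The decision logic described above can be sketched in plain Python as follows. All helper names are hypothetical placeholders for the actual Quarky blocks; the real program is uploaded to Quarky as the block script shown in the Code section.

def read_distance():
    # Hypothetical stand-in for the ultrasonic distance reading in cm.
    return 35.0

def act(step):
    # Hypothetical stand-in for drawing an LED pattern and moving the robot.
    print(step)

def obstacle_step():
    if read_distance() < 20:            # obstacle closer than 20 cm
        act("draw stop sign, stop and look straight")
        act("look left")
        if read_distance() > 40:        # clear on the left
            act("draw left arrow, turn left")
        else:
            act("look right")
            if read_distance() > 40:    # clear on the right
                act("draw right arrow, turn right")
            else:
                act("draw U arrow, make a U-turn")
    else:
        act("move forward")

obstacle_step()   # on the robot, this decision runs inside a forever loop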

Code

 

 



Upload the code to Quarky and test it.

Output

Read More
Learn how to use Machine Learning Environment to make a model that identifies the hand gestures and makes the Humanoid move accordingly.

This project demonstrates how to use Machine Learning Environment to make a machine–learning model that identifies hand gestures and makes the humanoid move accordingly.

We are going to use the Hand Classifier of the Machine Learning Environment. The model works by analyzing your hand position with the help of 21 data points.

Hand Gesture Classifier Workflow

Follow the steps below:

  1. Open PictoBlox and create a new file.
  2. Select the appropriate coding environment. You can click on “Machine Learning Environment” to open it.
  3. Select the “Open ML Environment” option under the “Files” tab to access the ML Environment.
  4. Click on “Create New Project“.
  5. A window will open. Type in a project name of your choice and select the “Hand Gesture Classifier” extension. Click the “Create Project” button to open the Hand Pose Classifier window.
  6. You shall see the Classifier workflow with two classes already made for you. Your environment is all set. Now it’s time to upload the data.

Class in Hand Gesture Classifier

There are 2 things that you have to provide in a class:

  1. Class Name: The name to which the class will be referred.
  2. Hand Pose Data: This data can be taken from the webcam or uploaded from local storage.

Note: You can add more classes to the projects using the Add Class button.
Adding Data to Class

You can perform the following operations to manipulate the data into a class.

  1. Naming the Class: You can rename the class by clicking on the edit button.
  2. Adding Data to the Class: You can add the data using the Webcam or by Uploading the files from the local folder.
    1. Webcam:
Note: You must add at least 20 samples to each of your classes for your model to train. More samples will lead to better results.
Training the Model

After data is added, it’s fit to be used in model training. To do this, we have to train the model. By training the model, we extract meaningful information from the hand pose, and that in turn updates the weights. Once these weights are saved, we can use our model to predict previously unseen data.

The accuracy of the model should increase over time. The x-axis of the graph shows the epochs, and the y-axis represents the accuracy at the corresponding epoch. Remember, the higher the reading in the accuracy graph, the better the model. The range of accuracy is 0 to 1.

Testing the Model

To test the model, simply enter the input values in the “Testing” panel and click on the “Predict” button.

The model will return the probability of the input belonging to the classes.

Export in Block Coding

Click on the “Export Model” button on the top right of the Testing box, and PictoBlox will load your model into the Block Coding Environment if you have opened the ML Environment in the Block Coding.

Logic

The Humanoid will move according to the following logic:

  1. When the forward gesture is detected – the Humanoid will move forward.
  2. When the backward gesture is detected – the Humanoid will move backward.
  3. When the left gesture is detected – the Humanoid will turn left.
  4. When the right gesture is detected – the Humanoid will turn right.

Code

Logic

  1. First, we initialize the Humanoid classes.
  2. Then, we open the recognition window, which will identify different poses, and turn on the camera with a certain level of transparency to identify images from the stage.
  3. If the identified class from the analyzed image is “forward,” the Humanoid will move forward at a specific speed.
  4. If the identified class is “backward,” the Humanoid will move backward using do () action () times at () speed block.
  5. If the identified class is “left,” the Humanoid will move left using do () action () times at () speed block.
  6. If the identified class is “right,” the Humanoid will move right using do () action () times at () speed block.
  7. Otherwise, the Humanoid will stay in the home position (a rough Python sketch of this mapping follows the list).
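The class-to-action mapping can be summarised in a short Python sketch. Here do_action() is a hypothetical stand-in for the do () action () times at () speed block, not the actual Humanoid extension API.

def do_action(action, times=1, speed=1000):
    # Hypothetical stand-in for "do () action () times at () speed".
    print(f"Humanoid: {action} x{times} at speed {speed}")

def drive_humanoid(identified_class):
    # Map each gesture class to one Humanoid action.
    if identified_class == "forward":
        do_action("forward")
    elif identified_class == "backward":
        do_action("backward")
    elif identified_class == "left":
        do_action("turn left")
    elif identified_class == "right":
        do_action("turn right")
    else:
        do_action("home")          # default: home position

drive_humanoid("forward")          # example: class returned by the model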

Output

Read More
[PictoBloxExtension]