
touching color () ?

Description

The block checks whether its sprite is touching a specified color. If it is, the block returns “true”.

Example

Learn how to use face detection to control humanoid robot movements for interactive and responsive robotics applications. Get started now!

Introduction

One of the most fascinating activities is face tracking, in which the Quarky detects a face and moves its head in the same direction as yours. Let's get started with the code for a face-tracking humanoid robot.

Logic

  1. If the face is tracked at the center of the stage, the humanoid should be straight.
  2. As the face moves to the left side, the humanoid will also move to the left side.
  3. As the face moves to the right side, the humanoid will also move to the right side.

Code

import time

sprite = Sprite('Tobi')
quarky = Quarky()
humanoid = Humanoid(7, 2, 6, 3, 8, 1)

fd = FaceDetection()
fd.video("on", 0)      # turn the camera feed on
fd.enablebox()         # show the bounding box around detected faces
fd.setthreshold(0.5)   # detection confidence threshold
time.sleep(1)

while True:
  fd.analysestage()                       # analyse the stage for faces
  for i in range(fd.count()):
    sprite.setx(fd.x(i + 1))              # move the sprite to the face position
    sprite.sety(fd.y(i + 1))
    sprite.setsize(fd.width(i + 1))       # scale the sprite to the face width
    angle = int(float(fd.width(i + 1)))   # value used to decide the direction
    if angle > 90:
      humanoid.move("left", 1000, 3)      # angle above 90: move left
    elif angle < 90:
      humanoid.move("right", 1000, 3)     # angle below 90: move right
      time.sleep(1)
    else:
      humanoid.home()                     # angle exactly 90: return home

Code Explanation

  1. First, we import libraries and create objects for the robot.
  2. Next, we set up the camera and enable face detection with a 0.5 threshold.
  3. We use a loop to continuously analyze the camera feed for faces and control the humanoid’s movement based on this information.
  4. When a face is detected, the humanoid sprite moves to the face’s location, and the angle of the face is used to determine the direction of movement.
  5. If the angle is greater than 90 degrees, the humanoid moves to the left. If the angle is less than 90 degrees, the humanoid moves to the right. If the angle is exactly 90 degrees, the humanoid returns to its original position.
  6. This code demonstrates how to use face detection to control the movement of a humanoid robot and how to incorporate external inputs into a program to create more interactive and responsive robotics applications.

Output

Read More
This project demonstrates how to use the Machine Learning Environment to make a machine-learning model that identifies hand gestures and makes the Mecanum robot move accordingly.

This project demonstrates how to use the Machine Learning Environment to make a machine-learning model that identifies hand gestures and makes the Mecanum move accordingly.

We are going to use the Hand Classifier of the Machine Learning Environment. The model works by analyzing your hand position with the help of 21 data points. We will add a total of 8 different classes to operate the different motions of the Mecanum robot with the help of the ML Environment of the PictoBlox software.

Hand Gesture Classifier Workflow

Follow the steps below:

  1. Open PictoBlox and create a new file.
  2. Select the appropriate coding environment (here, Block Coding).
  3. Select the “Open ML Environment” option under the “Files” tab to access the ML Environment.
  4. Click on “Create New Project“.
  5. A window will open. Type in a project name of your choice and select the “Hand Gesture Classifier” extension. Click the “Create Project” button to open the Hand Pose Classifier window.
  6. You shall see the Classifier workflow with two classes already made for you. Your environment is all set. Now it’s time to upload the data.

Class in Hand Gesture Classifier

There are 2 things that you have to provide in a class:

  1. Class Name: The name by which the class will be referred to.
  2. Hand Pose Data: This data can either be taken from the webcam or by uploading from local storage.

Note: You can add more classes to the projects using the Add Class button.
Adding Data to Class

You can perform the following operations to add data to a class.

  1. Naming the Class: You can rename the class by clicking on the edit button.
  2. Adding Data to the Class: You can add the data using the Webcam or by Uploading the files from the local folder.
    1. Webcam:
Note: You must add at least 20 samples to each of your classes for your model to train. More samples will lead to better results.
Training the Model

After data is added, it’s fit to be used in model training. In order to do this, we have to train the model. By training the model, we extract meaningful information from the hand pose, and that in turn updates the weights. Once these weights are saved, we can use our model to make predictions on data previously unseen.

The accuracy of the model should increase over time. The x-axis of the graph shows the epochs, and the y-axis represents the accuracy at the corresponding epoch. Remember, the higher the reading in the accuracy graph, the better the model. The range of the accuracy is 0 to 1.

Testing the Model

To test the model, simply enter the input values in the “Testing” panel and click on the “Predict” button.

The model will return the probability of the input belonging to the classes.

Export in Block Coding

Click on the “Export Model” button on the top right of the Testing box, and PictoBlox will load your model into the Block Coding Environment if you have opened the ML Environment in the Block Coding.

Logic

The mecanum will move according to the following logic:

  1. When the forward gesture is detected – Mecanum will move forward.
  2. When the backward gesture is detected – Mecanum will move backwards.
  3. When the Lateral Left gesture is detected – Mecanum will move towards the left direction laterally with the help of its omnidirectional wheels.
  4. When the Lateral Right gesture is detected – Mecanum will move towards the right direction laterally with the help of its omnidirectional wheels.
  5. When the Stop gesture is detected – Mecanum will stop moving.
  6. When the Normal Left gesture is detected – Mecanum will rotate in the left direction.
  7. When the Normal Right gesture is detected – Mecanum will rotate in the right direction.
  8. When the Circular Motion gesture is detected – Mecanum will move in a lateral arc.

Code

Initialization

Main Code

Output

Forward-Backward Motions:

Lateral Right-Left Motions:

Circular Right-Left Motions:

Lateral Arc Motion:

Read More
Learn how to use the Hand Gesture Classifier of the Machine Learning Environment to make a machine-learning model that identifies hand gestures and makes the Mecanum move accordingly.

This project demonstrates how to use the Machine Learning Environment to make a machine-learning model that identifies hand gestures and makes the Mecanum move accordingly.

We are going to use the Hand Classifier of the Machine Learning Environment. The model works by analyzing your hand position with the help of 21 data points. We will add a total of 8 different classes to operate the different motions of the Mecanum robot with the help of the ML Environment of the PictoBlox software.

Hand Gesture Classifier Workflow

Follow the steps below:

  1. Open PictoBlox and create a new file.
  2. Select the appropriate coding environment (here, Python Coding).
  3. Select the “Open ML Environment” option under the “Files” tab to access the ML Environment.
  4. Click on “Create New Project“.
  5. A window will open. Type in a project name of your choice and select the “Hand Gesture Classifier” extension. Click the “Create Project” button to open the Hand Pose Classifier window.
  6. You shall see the Classifier workflow with two classes already made for you. Your environment is all set. Now it’s time to upload the data.

Class in Hand Gesture Classifier

There are 2 things that you have to provide in a class:

  1. Class Name: The name by which the class will be referred to.
  2. Hand Pose Data: This data can either be taken from the webcam or by uploading from local storage.

Note: You can add more classes to the projects using the Add Class button.
Adding Data to Class

You can perform the following operations to add data to a class.

  1. Naming the Class: You can rename the class by clicking on the edit button.
  2. Adding Data to the Class: You can add the data using the Webcam or by Uploading the files from the local folder.
    1. Webcam:
Note: You must add at least 20 samples to each of your classes for your model to train. More samples will lead to better results.
Training the Model

After data is added, it’s fit to be used in model training. In order to do this, we have to train the model. By training the model, we extract meaningful information from the hand pose, and that in turn updates the weights. Once these weights are saved, we can use our model to make predictions on data previously unseen.

The accuracy of the model should increase over time. The x-axis of the graph shows the epochs, and the y-axis represents the accuracy at the corresponding epoch. Remember, the higher the reading in the accuracy graph, the better the model. The range of the accuracy is 0 to 1.

Testing the Model

To test the model, simply enter the input values in the “Testing” panel and click on the “Predict” button.

The model will return the probability of the input belonging to the classes.

Export in Python Coding

Click on the “Export Model” button on the top right of the Testing box, and PictoBlox will load your model into the Python Coding Environment if you have opened the ML Environment in the Python Coding.


Logic

The mecanum will move according to the following logic:

  1. When the forward gesture is detected – Mecanum will move forward.
  2. When the backward gesture is detected – Mecanum will move backwards.
  3. When the Lateral Left gesture is detected – Mecanum will move towards the left direction laterally with the help of its omnidirectional wheels.
  4. When the Lateral Right gesture is detected – Mecanum will move towards the right direction laterally with the help of its omnidirectional wheels.
  5. When the Stop gesture is detected – Mecanum will stop moving.
  6. When the Normal Left gesture is detected – Mecanum will rotate in the left direction.
  7. When the Normal Right gesture is detected – Mecanum will rotate in the right direction.
  8. When the Circular Motion gesture is detected – Mecanum will move in a lateral arc.

Code

The following code appears in the Python Editor of the selected sprite.

####################imports####################
# Do not change

import numpy as np
import tensorflow as tf
import time

# Do not change
####################imports####################

#Following are the model and video capture configurations
# Do not change

model=tf.keras.models.load_model(
    "num_model.h5",
    custom_objects=None,
    compile=True,
    options=None)
pose = Posenet()                                                    # Initializing Posenet
pose.enablebox()                                                    # Enabling video capture box
pose.video("on",0)                                                  # Taking video input
class_list=['Forward','Backward','Stop','LateralRight','LateralLeft','NormalRight','NormalLeft','CircularMotion']                  # List of all the classes
meca=Mecanum(1,2,7,8)
def runmecanum(predicted_class):
  if pose.ishanddetected():
    if predicted_class=="Forward":
      meca.runtimedrobot("forward",100,2)
    if predicted_class=="Backward":
      meca.runtimedrobot("backward",100,2)
    if predicted_class=="Stop":
      meca.stoprobot()
    if predicted_class=="LateralRight":
      meca.runtimedrobot("lateral right",100,2)
    if predicted_class=="LateralLeft":
      meca.runtimedrobot("lateral left",100,2)
    if predicted_class=="NormalRight":
      meca.runtimedrobot("circular right",100,1)
    if predicted_class=="NormalLeft":
      meca.runtimedrobot("circular left",100,1)
    if predicted_class=="CircularMotion":
      meca.runtimedrobot("lateral arc",100,1)
    
# Do not change
###############################################

#This is the while loop block, computations happen here
# Do not change

while True:
  pose.analysehand()                                             # Using Posenet to analyse hand pose
  coordinate_xy=[]
    
    # for loop to iterate through 21 points of recognition
  for i in range(21):
    if(pose.gethandposition(1,i,0)!="NULL"  or pose.gethandposition(2,i,0)!="NULL"):
      coordinate_xy.append(int(240+float(pose.gethandposition(1,i,0))))
      coordinate_xy.append(int(180-float(pose.gethandposition(2,i,0))))
    else:
      coordinate_xy.append(0)
      coordinate_xy.append(0)
            
  coordinate_xy_tensor = tf.expand_dims(coordinate_xy, 0)        # Expanding the dimension of the coordinate list
  predict=model.predict(coordinate_xy_tensor)                    # Making an initial prediction using the model
  predict_index=np.argmax(predict[0], axis=0)                    # Generating index out of the prediction
  predicted_class=class_list[predict_index]                      # Tallying the index with class list
  print(predicted_class)
  runmecanum(predicted_class)
  # Do not change

Logical Code

def runmecanum(predicted_class):
  if pose.ishanddetected():
    if predicted_class=="Forward":
      meca.runtimedrobot("forward",100,2)
    if predicted_class=="Backward":
      meca.runtimedrobot("backward",100,2)
    if predicted_class=="Stop":
      meca.stoprobot()
    if predicted_class=="LateralRight":
      meca.runtimedrobot("lateral right",100,2)
    if predicted_class=="LateralLeft":
      meca.runtimedrobot("lateral left",100,2)
    if predicted_class=="NormalRight":
      meca.runtimedrobot("circular right",100,1)
    if predicted_class=="NormalLeft":
      meca.runtimedrobot("circular left",100,1)
    if predicted_class=="CircularMotion":
      meca.runtimedrobot("lateral arc",100,1)

Output

Forward-Backward Motions:

Lateral Right-Left Motions:

Circular Right-Left Motions:

Lateral Arc Motion:

Read More
Learn about face-tracking, and how to code a face-tracking Quadruped robot using sensors and computer vision techniques.

Introduction

A face-tracking robot is a type of robot that uses sensors and algorithms to detect and track human faces in real-time. The robot’s sensors, such as cameras or infrared sensors, capture images or videos of the surrounding environment and use computer vision techniques to analyze the data and identify human faces.
Face-tracking robots have many potential applications, including in security systems, entertainment, and personal robotics. For example, a face-tracking robot could be used in a museum or amusement park to interact with visitors, or in a home as a companion robot that can recognize and follow the faces of family members.

One of the most fascinating activities is face tracking, in which the Quadruped detects a face and moves its head in the same direction as yours. Let's get started with the code for a face-tracking Quadruped robot.

Logic

  1. If the face is tracked at the center of the stage, the Quadruped should be straight.
  2. As the face moves to the left side, the Quadruped will also move to the left side.
  3. As the face moves to the right side, the Quadruped will also move to the right side.

Code Explanation

  1. Drag and drop the when green flag clicked block from the Events palette.
  2. Then, add a turn () video on stage with () % transparency block from the Face Detection extension and select one from the drop-down. This will turn on the camera.
  3. Add the set pins FR Hip () FL Hip () FR Leg () FL Leg () BR Hip () BL Hip () BR Leg () BL Leg () block from the Quadruped extension.
  4. Click on the green flag and your camera should start. Make sure this part is working before moving further.
  5. Add the forever block below turn () video on stage with () % transparency from the Control palette.
  6. Inside the forever block, add an analyse image from () block. This block will analyze the face the camera detects. Select the camera from the dropdown.
  7. Create a variable called Angle that will track the angle of the face. Based on the angle, the robot will move to adjust its position.
  8. Here comes the logical part: the position of the face on the stage matters a lot. Keeping that in mind, we will add the () / () (division) block from the Operator palette into the scripting area.
  9. Place the get () of face () block in the first slot of the division block and 3 in the second slot. From the dropdown, select X position. Then add 90 to the result using the () + () block and store it in the Angle variable.
  10. If the angle value is greater than 90, the Quadruped will move left at a specific speed. If the angle is less than 90, the Quadruped will move right at a specific speed. If the angle is exactly 90, the Quadruped will return to its home position.
Block Explained

  1. Create a variable called Angle and assign it the value derived from the face's position.
  2. At the center of the stage, the X position value is zero.
  3. As the face moves to the left, the X position value becomes negative; as it moves to the right, it becomes positive.
  4. The X position value is divided by 3, which gives more precise positioning.
  5. To set the angle to 90 when the face is at the center of the stage, we add 90 to the divided X position value.
  6. As the face moves to the left, the angle value decreases because the X position goes negative.
  7. As the face moves to the right, the angle value increases because the X position goes positive. (A minimal Python sketch of this calculation follows the list.)
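
The same calculation can be written as a few lines of plain Python. This is only a sketch of the arithmetic described above (it assumes the stage X position behaves as in PictoBlox, roughly -240 at the far left to 240 at the far right); it is not the block script itself.

def face_angle(x_position):
  # Centre of the stage (x = 0) gives 90 degrees; left of centre gives
  # smaller angles, right of centre gives larger ones.
  return 90 + x_position / 3

def head_direction(angle):
  # Decide the Quadruped's movement from the angle, as in step 10 above.
  if angle > 90:
    return "left"
  elif angle < 90:
    return "right"
  return "home"

# Quick check with a face left of centre, at the centre, and right of centre.
for x in (-120, 0, 120):
  angle = face_angle(x)
  print(x, angle, head_direction(angle))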

Code

Output

Our next step is to check whether it is working correctly. Whenever your face comes in front of the camera, the robot should detect it, and as you move to the right or left, the head of your Quadruped robot should move accordingly.

Read More
Convert any word or phrase into a delightful sequence of emojis with our Emoji Converter.

Introduction

Are you looking to add some fun and expressiveness to your conversations? Look no further! I’m here to help you convert any word or phrase into a colorful array of emojis. Whether you want to spice up your messages, or social media posts, or simply bring a smile to someone’s face, I’ve got you covered.

Just type in the word or phrase you want to transform, and I'll generate a delightful sequence of emojis that capture the essence of your text. From happy faces to animals, objects, and everything in between, emojis are a universal language that transcends words.

So, let’s get started and infuse your text with a touch of emoji magic! 🎉🔥

Logic

This code allows the user to interact with the sprite and provide a word, which is then transformed into a sequence of emojis using the ChatGPT model. The sprite then speaks the generated emoji response.

  1. Open PictoBlox and create a new file.
  2. Choose a suitable coding environment for Block-based coding.
  3. Define a sprite, Tobi.
  4. Then, we create an instance of the ChatGPT model using the ChatGPT class.
  5. The sprite, named Tobi, asks for a word that can be converted into emojis by using the command sprite.input("Please provide a word that I can convert into emojis").
  6. After receiving the input, Tobi uses the answer() function to read the provided word.
  7. Next, the language model, ChatGPT, is involved. Its movieToemoji() function takes the word provided to Tobi and converts it into a sequence of emojis.
  8. Finally, the result returned by ChatGPT is stored in the variable result. Tobi then uses the command sprite.say(result, 5) to display the result for 5 seconds.
  9. In summary, Tobi the sprite asks for a word, ChatGPT converts the input into emojis, and Tobi displays the result.

Code

sprite = Sprite('Tobi')
gpt = ChatGPT()

# Ask the user for a word and read the reply
sprite.input("Please provide a word that I can convert into emojis")
answer = sprite.answer()

# Convert the word into emojis with ChatGPT and speak the result for 5 seconds
gpt.movieToemoji(answer)
result = gpt.chatGPTresult()
sprite.say(result, 5)

Output

Read More
Welcome to the Noun Detector, a powerful tool that utilizes ChatGPT and the spaCy library to identify and extract nouns from text.

Introduction

Welcome to the Noun Detector! This powerful tool utilizes the capabilities of ChatGPT and leverages the spaCy library to identify and extract nouns from text. By employing advanced natural language processing techniques, the Noun Detector analyzes sentences and highlights the essential elements that represent people, places, objects, or concepts.

Noun Detector is designed to excel at identifying and extracting nouns from text. Experience the Noun Detector’s capabilities firsthand and unlock the power of noun extraction in your language-processing endeavors. Try it out and witness the precision and efficiency of this invaluable tool!

Code

sprite = Sprite('Tobi')

quarky=Quarky()
gpt = ChatGPT()

gpt.askOnChatGPT("AIAssistant", "Generate a simple random sentence for me")
result=gpt.chatGPTresult()
gpt.getgrammerfromtext("GrammerNoun",result)
noun=gpt.chatGPTresult()

sprite.say(result,5)
print(result)
print(noun)


sprite.input("Indentify and write the noun in sentance")
answer= str(sprite.answer())

if the answer in noun:
  sprite.say("You have a strong understanding of noun concepts. Well done!",5)
else:
  sprite.say("Please check the terminal for the correct answer as your response is incorrect",5)

Logic

  1. Open PictoBlox and create a new file.
  2. Choose a suitable coding environment for block-based coding.
  3. We have a sprite character named Tobi.
  4. Add the ChatGPT extensions to your project from the extension palette located at the bottom right corner of PictoBlox.
  5. We will ask the AI assistant to create a random sentence for us.
  6. The AI assistant will generate the sentence and identify the nouns in it.
  7. Tobi will then say the generated sentence out loud for 5 seconds. The sentence and the identified nouns will be displayed on the screen.
  8. Next, Tobi will ask you to identify and write the noun in the sentence. You need to type your answer.
  9. If your answer matches any of the identified nouns, Tobi will appreciate you.
  10. But if your answer is incorrect, Tobi will say to check the terminal.
  11. So, give it a try and see if you can identify the noun correctly!

Output

Read More
Expand your vocabulary and enhance your writing with the Synonyms and Antonyms Word Converter.

Introduction

The Synonyms and Antonyms Word Converter is a powerful tool powered by the ChatGPT extension that allows users to effortlessly find synonyms and antonyms for words. It harnesses the capabilities of the advanced language model to provide accurate and contextually relevant word alternatives.

With the Synonyms and Antonyms Word Converter, you can expand your vocabulary, enhance your writing, and improve your communication skills. Whether you’re a writer seeking more expressive language or a student looking to diversify your word choices, this tool is designed to assist you in finding suitable alternatives quickly and easily.

Using the ChatGPT extension, the Synonyms and Antonyms Word Converter engages in interactive conversations, making it an intuitive and user-friendly tool. By providing a word as input, you can receive a list of synonyms or antonyms, depending on your preference, helping you to diversify your language and convey your ideas with precision.

Code

sprite = Sprite('Tobi')
gpt = ChatGPT()
str2 = ""
var2=""

sprite.input("Please provide a word for which you would like to find synonyms and antonyms")
answer= str(sprite.answer())

gpt.getsynonymsAntonymsfromText("Synonyms",answer)
str1=gpt.chatGPTresult()


for i in str1:
    if not i.isdigit():
        str2 += i
        
print("Synonyms words are:", str2)

gpt.getsynonymsAntonymsfromText("Antonyms",answer)
var1=gpt.chatGPTresult()

for j in var1:
    if not j.isdigit():
        var2 += j
        
print("Antonyms words are:", var2)

Logic

  1. Open PictoBlox and create a new file.
  2. Select the environment as the appropriate Python Coding Environment.
  3. First, an instance of the Sprite class is created, with the name "Tobi".
  4. To add the ChatGPT extension, click on the extension button located as shown in the image. This will enable the ChatGPT extension, allowing you to incorporate its capabilities into your project.
  5. Two empty strings, str2 and var2, are declared to store the resulting synonyms and antonyms, respectively.
  6. The user is prompted to provide a word for which they want to find synonyms and antonyms using the input() method from the Sprite library.
  7. The user's input is stored in the answer variable as a string.
  8. The getsynonymsAntonymsfromText() method is called on the gpt object to find synonyms for the provided word. The category "Synonyms" is specified.
  9. The resulting synonyms are obtained from gpt.chatGPTresult() and stored in the str1 variable.
  10. The code then iterates over each character in str1 and appends non-digit characters to str2, filtering out any numerical values.
  11. Finally, the code prints the extracted synonyms stored in str2.
  12. The process is repeated for finding antonyms: the getsynonymsAntonymsfromText() method is called with the category "Antonyms", and the resulting antonyms are stored in the var1 variable.
  13. Non-digit characters are extracted and stored in var2, which contains the antonyms.
  14. The code concludes by printing the extracted antonyms stored in var2.
  15. Press Run to run the code.
  16. Sprite Tobi asks for the word you want synonyms/antonyms for.
  17. Go to the terminal. The terminal will display the synonyms and antonyms for the word.

Output

Read More
Discover how a robotic arm playing chess showcases the synergy between robots and AI.

Introduction

A robotic arm playing chess is a great example of how robots and AI can work together to do complex tasks. Chess is a game that needs smart thinking and careful moves. The robotic arm is like a human arm and can move pieces on the chessboard.

The robotic arm has different parts like joints, actuators, sensors, and a gripper. The joints let the arm move in different ways, just like a human arm. The actuators control the arm’s movements, so it can make precise and planned moves during the game.

The robotic arm uses AI and computer vision to play chess. The AI algorithms study the chessboard, figure out where the pieces are, and decide on the best moves. They consider things like how valuable each piece is and where they are on the board. The arm’s sensors tell it where it is, so it can pick up the pieces and put them in the right places accurately.

When the AI finds the best move, the robotic arm carefully grabs the chosen piece, lifts it up, and puts it on the right square of the chessboard. The gripper has sensors to handle the pieces gently and not damage them.

The robotic arm playing chess is an amazing example of how robots, AI, and computer vision can work together. It shows how we can use complex algorithms and physical abilities to do tasks that people usually do. This technology can be useful in many fields like manufacturing, logistics, and healthcare, where we need precise and automated movements.

In summary, a robotic arm playing chess is a cool combination of robotics, AI, and computer vision. It can make smart and accurate moves on a chessboard. It’s a big achievement in robotics and shows how automation and AI can do complex tasks in different industries.

Code

Logic

  1. Drag and drop the Set Pins link1() link2() base () gripper() block to adjust all the pins to their correct angles.
  2. Set the orientation along the Z-axis by using a value of -10 in the downward direction using set offset along length() & Z() block.
  3. Set the gripper to open and close at the appropriate angle using () gripper block.
  4. Set the arm to its home position using home() block.
  5. Open the gripper using (open) gripper block.
  6. Move the arm to a specific direction and point using move() in() axis in ()ms block.
  7. Close the gripper.
  8. The arm will pick up the chess piece and place it in a specific location while following the rules of chess (a plain-Python dry run of this sequence is sketched after this list).
  9. Press the "Run" button to execute the code.
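
The sequence above can be dry-run in plain Python using print-only stubs in place of the actual robotic-arm blocks. The function names and coordinates below are illustrative placeholders, not the PictoBlox robotic arm API.

def go_home():
  print("arm: home position")

def open_gripper():
  print("gripper: open")

def close_gripper():
  print("gripper: close")

def move_to(x, y, z, ms):
  print(f"arm: move to ({x}, {y}, {z}) in {ms} ms")

def pick_and_place(source, target):
  # Pick a chess piece at `source` and place it on `target` (x, y, z).
  go_home()
  open_gripper()
  move_to(*source, 1000)   # reach the piece
  close_gripper()          # grip it gently
  move_to(*target, 1000)   # carry it to the destination square
  open_gripper()           # release the piece
  go_home()

# Example move with made-up coordinates.
pick_and_place((120, 0, 40), (160, 40, 40))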

Output


 

Read More
Discover the versatility and benefits of automatic robotic arms in various industries.

Introduction

An automatic robotic arm is a mechanical device that imitates a human arm. It can be programmed and used in many industries. The arm consists of linked parts that can move and rotate, enabling it to do various tasks. Technology advancements like AI and machine learning have led to more advanced robotic arms. These arms can adapt and work autonomously. As a result, they are now widely used across industries and play a crucial role in automation systems.

Code

Logic

  1. Open the Pictoblox application. Select the block-based environment.
  2. Click on the robotic arm extension available in the left corner.
  3. Start by setting up the pins for the four connections using the set pins () block.
  4. Define the open angle () and close angle () for the gripper.
  5. Establish the home position for the gripper. Drag and drop a forever loop so the following steps run continuously.
  6. Open the () gripper, then move the robotic arm by changing the X, Y, and Z axes individually at specific intervals.
  7. Use the move along the X(), Y(), and Z() axis in () ms block to move the arm.
  8. Close the gripper. Use the go to () block to change the arm's position along a specific axis.
  9. Open the gripper, return the arm to the home position, and close the gripper.
  10. Adjust the position of the robotic arm by moving it along the X(), Y(), and Z() axes one by one using the move along the X() Y() Z() axis in () ms block.
  11. Use the go to () in () axis in () ms block to change the position of the arm.
  12. Open the gripper, then return to the home() position, then close the gripper().
  13. Add an interval of 0.2 seconds.
  14. Press Run to run the code.

Output

Read More
Discover the power of Pulse Width Modulation (PWM) in Arduino, enabling voltage control and pulse width adjustment.

Introduction

PWM and Its Applications

Pulse Width Modulation (PWM) is a powerful signal that allows precise control over voltage values and the time for each voltage level. By selectively choosing voltage values and durations, PWM signals can be fine-tuned to meet specific requirements. In PWM signals, the time lengths for the LOW and HIGH states of the pulse can vary, as depicted in the figure below. PWM has various applications and is commonly used to control LEDs and DC motors.

Applications
  1. LED Control: PWM controls the brightness of light emitted by LEDs by rapidly switching them ON and OFF; varying the pulse width changes the apparent brightness.
  2. DC Motor Control: In DC motors, PWM acts as a pulse train, delivering more or less average electrical power depending on the width of the PWM pulses.
PWM pins in Arduino 

The Arduino Uno has 14 digital input/output pins, six of which can be used as PWM outputs (marked with a tilde, ~, on the board). The PWM output has 8-bit resolution, so it accepts values from 0 to 255 (corresponding to average outputs of 0 V and 5 V, respectively).
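
As a quick illustration of that 0-255 range, the PWM value that approximates a desired average output voltage can be computed as follows (plain Python, assuming a 5 V supply and 8-bit resolution):

def pwm_value(voltage, supply=5.0, resolution=255):
  # Clamp the request to the valid range, then scale it to 0-255.
  voltage = max(0.0, min(voltage, supply))
  return round(voltage / supply * resolution)

print(pwm_value(0))    # 0   -> always LOW
print(pwm_value(2.5))  # 128 -> roughly a 50% duty cycle
print(pwm_value(5))    # 255 -> always HIGH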


Understanding Analog Output

Analog signals have variable magnitudes throughout their cycles. Examples of analog signals include the output of smoke sensors and soil moisture sensors. In Arduino, PWM pins can be used to generate analog signals as output.

Circuit Diagram

 

  • LED+ to pin 3
  • LED – to GND

Code

  1. Select Arduino Uno from the boards and connect it via USB.
  2. Create a variable “brightness” and set its initial value to 0.
  3. Use the “repeat until ()” block from the Control palette.

  4. Check the voltage range for PWM pins (0-255) using a comparison operator.
  5. Increase the value of “brightness” by +5 inside the “repeat until ()” block.

  6. Use the “set PWM pin() output as()” block from the Arduino palette to set pin 3 as a PWM output and adjust its voltage level using the “brightness” variable.
  7. Place this block inside the “repeat until ()” block.
  8. Set the PWM value to brightness and change the brightness by +5.
  9. Again, add a repeat until () block and check whether the brightness is greater than 0.
  10. Now change the brightness by -5 and assign this value to PWM pin 3.
  11. Merge all the code blocks and insert them inside the "forever" block. Also, add the "when flag clicked" event. (The plain-Python sketch after this list shows the same up-and-down ramp.)
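
The ramp described in these steps can be sketched in plain Python. Here set_pwm is a print-only stand-in for the "set PWM pin () output as ()" block, not a real Arduino or PictoBlox call.

import time

def set_pwm(pin, value):
  # Placeholder for the "set PWM pin () output as ()" block.
  print("PWM pin", pin, "->", value)

while True:
  # Fade up: repeat until brightness reaches 255, stepping by +5.
  for brightness in range(0, 256, 5):
    set_pwm(3, brightness)
    time.sleep(0.02)
  # Fade down: repeat until brightness reaches 0, stepping by -5.
  for brightness in range(255, -1, -5):
    set_pwm(3, brightness)
    time.sleep(0.02)
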
Script:

Output
Read More
Learn about servo motors and how to interface them with Arduino for servo control.

Introduction

Understanding Servo Motors and How They Work

Servo motors are part of a closed-loop system and are comprised of several parts, namely a control circuit, servo motor, shaft, potentiometer, drive gears, amplifier, and either an encoder or resolver. A servo motor is a self-contained electrical device that rotates parts of a machine with high efficiency and great precision. The output shaft of this motor can be moved to a particular angle, position, and velocity that a regular motor cannot match.

Circuit Diagram

 

Code

  • From the Events palette, drag the when flag clicked block.
  • From the Arduino palette, drag the set servo on () to () block.
  • Add a forever block and insert an if-else block inside it.
  • Read the status of the IR sensor on pin 7 of the Arduino using the read status of digital pin () block.
  • If the sensor value is HIGH, the servo must rotate to 180 degrees for 3 seconds; otherwise it remains at 90 degrees (see the sketch after this list).
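
The decision in the last bullet can also be expressed as a small plain-Python sketch; read_ir and set_servo_angle are illustrative stubs (and the servo pin number is arbitrary), not actual Arduino or PictoBlox calls.

import time

def read_ir(pin):
  # Stub for "read status of digital pin ()"; pretend the sensor reads HIGH.
  return 1

def set_servo_angle(pin, angle):
  # Stub for "set servo on () to ()".
  print("servo on pin", pin, "->", angle, "degrees")

while True:
  if read_ir(7):              # IR sensor on pin 7 reads HIGH
    set_servo_angle(9, 180)   # rotate the servo to 180 degrees...
    time.sleep(3)             # ...and hold it there for 3 seconds
  else:
    set_servo_angle(9, 90)    # otherwise keep the servo at 90 degrees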

Script:

Output

 

 

 

 

Read More
The examples show how to use pose recognition in PictoBlox to maintain a yoga pose for a particular time interval.

Script

The idea is simple: we'll add one image of each class to the "costume" column by making one new sprite, which will be displayed on the stage according to the user's input. We'll also rename each image according to its pose.

  1. Add testing images to the backdrop and delete the default backdrop.
  2. Now, come back to the coding tab and select the Tobi sprite.
  3. We’ll start by adding a when flag clicked block from the Events palette.
  4. We made the new variable “count” by choosing the “Make a Variable” option from the Variables palette.
  5. Add the “hide variable ()” block from the Variables palette. Select count.
  6. Add the “turn () video on stage with () transparency” block from the Machine Learning palette. Select the off option at the first empty place, and for the second, write a 0 value.
  7. Add an “ask () and wait” block from the Sensing palette. Write an appropriate statement in an empty place.
  8. Add the “if () then” block from the control palette for checking the user’s input.
  9. In the empty place of the “if () then” block, add a condition checking block from the operators palette block. At the first empty place, put the answer block from the sensing palette, and at the second place, write an appropriate statement.
  10. Inside the “if () then” block, add a “broadcast ()” block from the Events palette block. Select the “New message” option and write an appropriate statement for broadcasting a message to another sprite.
  11. Add the “turn () video on stage with () transparency” block from the Machine Learning palette. Select the option at the first empty place, and for the second, write a 0 value.
  12. Add the “() key points” block from the Machine Learning palette. Select the show option.
  13. Add the “Set () to ()” block from the Variables palette. Select the count option at the first empty place, and for the second, write a 30 value.
  14. Add the Show variable () block from the Variables palette. Select count.
  15. Add “forever” from the Control palette.
  16. Inside the “forever” block, add an “analyse image from ()” block from the Machine Learning palette. Select the Web camera option.
  17. Inside the “forever” block, add an “if () then” block from the Control palette.
  18. In the empty place of the “if () then” block, add an “is identified class ()” block from the Machine Learning palette. Select the appropriate class from the options.
  19. Inside the “if () then” block, add a “say ()” block from the Looks palette. Write an appropriate statement in the empty place.
  20. Add “change () by ()” from the Variables palette. Select the count option in the first empty place, and for the second, write a -1 value.

  21. Add the “if () then” block from the control palette for checking the user’s input.
  22. In the empty place of the “if () then” block, add a condition checking block from the operators palette block. In the first empty place, put the “count” block from the sensing palette, and in the second place, write 0.
  23. Add the “Set () to ()” block from the Variables palette. Select the count option at the first empty place, and for the second, write a 30 value.
  24. Add the “turn () video on stage with () transparency” block from the Machine Learning palette. Select the off option at the first empty place, and for the second, write a 0 value.
  25. Inside the “if () then” block, add a “say ()” block from the Looks palette. Write an appropriate statement in the empty place.
  26. Add the “() key points” block from the Machine Learning palette. Select the hide option
  27. Add the “stop ()” block from the Control palette. Select the all option.
  28. Repeat “if () then” block code for other classes, make appropriate changes in copying block code according to other classes, and add code just below it.
  29. The final block code looks like this:
  30. Now click on another sprite and write code.
  31. We’ll start writing code for this sprite by adding a when flag is clicked block from the Events palette.
  32. Add the “hide” block from the Looks palette.
  33. Write a new code in the same sprite according to class and add the “when I receive ()” block from the Events palette. Select the appropriate class from the options.
  34. Add the “show” block from the Looks palette.
  35. Add the “switch costume to ()” block from the Looks palette. Select the appropriate class from the options.
  36. Repeat the same code for other classes and make changes according to the class.

Final Result

Read More
Learn how to interface an MQ sensor with Quarky to detect the presence of gases like alcohol.

Introduction

The MQ series of sensors comprises a range of gas detectors used to detect multiple gases like CO2, LPG, CO, and more. These sensors find applications in various scenarios, from detecting fire-induced smoke in buildings to detecting gas leaks, making them crucial for mining and other industries.


In this example, we will interface an MQ sensor with Quarky to specifically detect the presence of alcohol. Our objective is to detect alcohol levels and trigger an alarm if they exceed a certain limit. Let’s embark on this exciting journey of gas detection with Quarky!

Circuit Diagram:

Code:

  1. Create the circuit as per the provided circuit diagram.
  2. Open Pictoblox and create a new file.
  3. Select Quarky from the Board menu.
  4. Drag the “if-then-else” block from the controls palette into the scripting area.
  5. From the operator palette, add the “greater than” operator into the “if” conditional part.
  6. Insert the “read analog sensor () at pin ()” block from the Quarky sensor palette into the space provided in the “greater than” operator block.
  7. As the sensor is connected to A1, set the condition to check whether the value generated by the sensor is greater than 50. If it is, the gas (alcohol) level is above the desired limit and Quarky should trigger the LED connected to pin D1; otherwise, the LED should remain off. (A plain-Python sketch of this threshold check follows the list.)
  8. Drag the “forever” block from the controls palette into the scripting area and place the above code block inside the “forever” block.
  9. Now your script is complete. Add an event to start the script by dragging the “when flag clicked” block from the events palette to the beginning of the script.
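
The threshold check in steps 5-7 boils down to a simple comparison. The sketch below shows it in plain Python; the readings are made-up values standing in for the "read analog sensor () at pin ()" block on pin A1.

ALCOHOL_THRESHOLD = 50  # analog readings above this indicate too much alcohol vapour

def alarm_needed(reading):
  # True means the LED on D1 should be switched on.
  return reading > ALCOHOL_THRESHOLD

# Two example readings: one safe, one above the limit.
for reading in (20, 72):
  print(reading, "-> LED", "ON" if alarm_needed(reading) else "OFF")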

 

Script

 

Output

With this exciting project, you have learned how to interface an MQ sensor with Quarky to detect alcohol gas levels. Explore the diverse applications of MQ sensors, from detecting smoke in buildings to monitoring gas leaks in industrial settings. Create your own gas detection system with Quarky, and unleash the potential of gas sensing technology! Keep experimenting, and the world of robotics and AI will become your playground!

 


 

 

Read More
This project demonstrates a Two IR Line Following Robot using external IR sensors and the Quarky.

Steps: 

  1. Connect the external IR module's analog pin to a Quarky analog pin and select that pin in the block.
  2. Set the IR threshold for line following and for stopping the robot at the crossing lines.
  3. Use the read analog pin block to read the IR value and determine a suitable IR threshold.
  4. The example below uses the without-PID line following blocks.
  5. When you click do line following, the robot starts following the line and stops at the check-point (when both IRs are on the black line). A plain-Python sketch of this decision logic follows the list.
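
The decision logic for one loop iteration looks like this in plain Python. The threshold and the returned motion names are illustrative only; the real values come from the read analog pin block and the without-PID line following blocks.

IR_THRESHOLD = 2000  # illustrative: readings above this mean the IR sees the black line

def on_black(reading):
  return reading > IR_THRESHOLD

def line_follow_step(left_reading, right_reading):
  # Return the motion for one iteration of a two-IR line follower.
  left_on_line = on_black(left_reading)
  right_on_line = on_black(right_reading)
  if left_on_line and right_on_line:
    return "stop"        # both IRs on black: check-point reached
  if left_on_line:
    return "turn left"   # line drifting left: steer back onto it
  if right_on_line:
    return "turn right"  # line drifting right: steer back onto it
  return "forward"       # both IRs on white: keep going straight

print(line_follow_step(500, 600))    # forward
print(line_follow_step(2500, 2600))  # stop at the crossing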

Script

 

Read More
In this example, we look at how to establish and see if the Wi-Fi is connected to Quarky or not.


Code

The following code can be used for it:

The following code is generated by PictoBlox:

# This Python code is generated by PictoBlox

from quarky import *

# imported modules
import iot

wifi = iot.wifi()

# Connect to the Wi-Fi network with the given name and password
wifi.connecttowifi("IoT", "12345678")

while True:
  if wifi.iswificonnected():
    # Connected: show the pattern for the green light
    quarky.drawpattern("ccccccccccccccccccccccccccccccccccc")
  else:
    # Not connected: show the pattern for the red light
    quarky.drawpattern("bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb")

Output

Troubleshooting

  1. If the Green Light appears, your Wi-Fi is connected.
  2. If the Red Light appears, your Wi-Fi is not connected. Correct the Wi-Fi name and password and try again.
  3. If the Red Cross sign appears, a Python error has occurred on the Quarky. Check the serial monitor and try resetting the Quarky.
Read More
Explore the power of machine learning in recognizing hand gestures and controlling the movements of a Quadruped robot.

Introduction

This project demonstrates how to use the Machine Learning Environment to make a machine-learning model that identifies hand gestures and makes the Quadruped move accordingly.

We are going to use the Hand Classifier of the Machine Learning Environment. The model works by analyzing your hand position with the help of 21 data points.

Hand Gesture Classifier Workflow

Follow the steps below:

  1. Open PictoBlox and create a new file.
  2. You can click on “Machine Learning Environment” to open it.
  3. Click on “Create New Project“.
  4. A window will open. Type in a project name of your choice and select the “Hand Gesture Classifier” extension. Click the “Create Project” button to open the Hand Pose Classifier window.
  5. You shall see the Classifier workflow with two classes already made for you. Your environment is all set. Now it’s time to upload the data.

Class in Hand Gesture Classifier

There are 2 things that you have to provide in a class:

  1. Class Name: The name by which the class will be referred to.
  2. Hand Pose Data: This data can either be taken from the webcam or by uploading from local storage.

Note: You can add more classes to the projects using the Add Class button.
Adding Data to Class

You can perform the following operations to add data to a class.

  1. Naming the Class: You can rename the class by clicking on the edit button.
  2. Adding Data to the Class: You can add the data using the Webcam or by Uploading the files from the local folder.
    1. Webcam:
Note: You must add at least 20 samples to each of your classes for your model to train. More samples will lead to better results.

We are going to use the Hand Classifier of the Machine Learning Environment.

 

Training the Model

After data is added, it’s fit to be used in model training. In order to do this, we have to train the model. By training the model, we extract meaningful information from the hand pose, and that in turn updates the weights. Once these weights are saved, we can use our model to make predictions on data previously unseen.

The accuracy of the model should increase over time. The x-axis of the graph shows the epochs, and the y-axis represents the accuracy at the corresponding epoch. Remember, the higher the reading in the accuracy graph, the better the model. The range of the accuracy is 0 to 1.

Testing the Model

 

Hand Pose Classifier

The model will return the probability of the input belonging to the classes. You will have the following output coming from the model.

Logic

The Quadruped will move according to the following logic:

  1. When the forward gesture is detected – Quadruped will move forward.
  2. When the backward gesture is detected – Quadruped will move backward.
  3. When the left gesture is detected – Quadruped will turn left.
  4. When the right gesture is detected – Quadruped will turn right. (A small sketch of this gesture-to-motion mapping follows the list.)
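
A minimal sketch of this gesture-to-motion mapping in plain Python; the motion strings are descriptive placeholders rather than actual Quadruped API calls.

def quadruped_action(predicted_class):
  # Map the predicted gesture class to the Quadruped motion described above.
  actions = {
    "forward": "move forward",
    "backward": "move backward",
    "left": "turn left",
    "right": "turn right",
  }
  return actions.get(predicted_class, "stay")  # unknown gesture: do nothing

for gesture in ("forward", "left", "wave"):
  print(gesture, "->", quadruped_action(gesture))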

Code

Output

Read More
Learn how to create a switch on Adafruit IO with Python code and an IoT-enabled Smart Plug. This project demonstrates how to control a plug and retrieve information from the cloud with the help of a Quarky Expansion Board, Adafruit IO, and an IoT house.

The project demonstrates how to create a smart plug that can be controlled by an IoT device and that can retrieve information from the cloud. The smart plug can be used to turn lights ON and OFF.

Creating a Switch on Adafruit IO

We will be using Adafruit IO for creating a switch on the cloud. Follow the instructions:

  1. Create a new Feed named Light.
  2. Create a new Dashboard named Light Control.
  3. Edit the Dashboard and add a Toggle Block.
  4. Connect the Light feed to the block and click on Next Step.
  5. Edit the Block Setting and click on Create Block.
  6. Block is added. You can try to toggle the switch.
  7. Go to the Light feed. You will observe the value of the feed changing as we click on the switch on the Dashboard.

Adafruit IO Key

You can find the information about your account once you log in from here:

Note: Make sure you are logged in to Adafruit IO: https://io.adafruit.com/

Circuit For the Lamp

The bulb is connected to the smart plug which is controlled with a relay.

Note:  A relay is an electromechanical switch that is used to turn on or turn off a circuit by using a small amount of power to operate an electromagnet to open or close a switch.

If the relay is ON, the smart switch gets ON, turning on the light. The relay has the following connections:

  1. GND Pin connected to GND of the Quarky Expansion Board.
  2. VCC Pin connected to VCC of the Quarky Expansion Board.
  3. Signal Pin connected to Servo 4 of the Quarky Expansion Board.

Python Code for Stage Mode

This Python code connects to Adafruit IO using the given credentials and checks if the light is ON or OFF. If the light is ON, then the code sets the relay to 0 (ON) and sets the LED to white with a brightness of 100. If the light is OFF, then the code sets the relay to 1 (OFF) and sets the LED to black with a brightness of 100. The code runs in an infinite loop to continuously check the status of the light.

#This code is used to control the light in an IOT house. 
#The code connects to Adafruit IO using the given credentials and checks if the light is "ON" or "OFF". 
#If the light is "ON" then the code sets the relay to 0 (ON) and sets the LED to white with a brightness of 100. 
#If the light is "OFF" then the code sets the relay to 1 (OFF) and sets the LED to black with a brightness of 100. 

quarky = Quarky() #Creating an object for the Quarky class

adaio = AdaIO() #Creating an object for the AdaIO class
house = IoTHouse() #Creating an object for the IoTHouse class

adaio.connecttoadafruitio("STEMNerd", "aio_UZBB56f7VTIDWyIyHX1BCEO1kWEd") # Connecting to Adafruit IO using the given credentials 

while True: # Loop that runs forever
  if (adaio.getdata("Light") == "ON"): #Checking if the light is "ON"
    house.setrelay(0, "pwm4") # Setting the relay to 0
    quarky.setled(1, 1, [255, 255, 255], 100) #Setting the LED to white with a brightness of 100

  if (adaio.getdata("Light") == "OFF"): #Checking if the light is "OFF"
    house.setrelay(1, "pwm4") # Setting the relay to 1
    quarky.setled(1, 1, [0, 0, 0], 100) #Setting the LED to black with a brightness of 100

 

Output

IoT-Enabled Smart Plug Upload Mode

You can also make the IoT-enabled Smart Plug work independently of PictoBlox using the Upload Mode. For that, switch to Upload Mode and replace the when green flag clicked block with the when Quarky starts up block.

This code connects to a Wi-Fi network and an Adafruit IO account, creates an object for the IoTHouse class, and then sets a relay and LED based on the data from the Adafruit IO account. The loop runs forever and continuously checks whether the Wi-Fi is connected and whether the light is "ON" or "OFF". If the Wi-Fi is not connected, it sets the LED to red.

from quarky import *

# Connect to a wifi network
import iot
import iothouse
wifi = iot.wifi()
wifi.connecttowifi("IoT", "12345678")

# Connect to an adafruit IO account
adaio = iot.AdaIO()
adaio.connecttoadafruitio("STEMNerd", "aio_UZBB56f7VTIDWyIyHX1BCEO1kWEd")

#Creating an object for the IoTHouse class
house = iothouse.iothouse()

while True:  # Loop that runs forever
  # Check if the wifi is connected
  if wifi.iswificonnected():
    if (adaio.getdata("Light") == "ON"):  #Checking if the light is "ON"
      house.setrelay(0, "pwm4")  # Setting the relay to 0
      quarky.setled(1, 1, [255, 255, 255], 100)  #Setting the LED to white with a brightness of 100

    if (adaio.getdata("Light") == "OFF"):  #Checking if the light is "OFF"
      house.setrelay(1, "pwm4")  # Setting the relay to 1
      quarky.setled(1, 1, [0, 0, 0], 100)  #Setting the LED to black with a brightness of 100

  else:
    # Set LED 1 to red
    quarky.setled(1, 1, [255, 0, 0], 100)

 

Read More
Learn how to create a crawling motion with a quadruped robot using individual servo control.

Introduction

The project demonstrates how to make the crawling motion with Quadruped using individual servo control.

Logic

For this project, we are using the set servos () () () () () () () () at () speed block that sets the servo motors of the quadruped to the specified angles at the specified speed.

There are four positions of the robot we are going to make to create the crawling motion:

  1. Position 1
  2. Position 2
  3. Position 3
  4. Position 4

Code

Output

Read More
Explore the surroundings with our obstacle avoidance Mars Rover that uses an ultrasonic sensor to detect and avoid obstacles. Learn how the robot moves, detects obstacles, and navigates its way through them.

This project of obstacle avoidance is for a robot that will move around and look for obstacles. It uses an ultrasonic sensor to measure the distance. If the distance is less than 20 cm, it will stop and look in both directions to see if it can move forward. If it can, it will turn left or right. If not, it will make a U-turn.

Logic

  1. This code is making a robot move around and explore its surroundings. It has an ultrasonic sensor that can measure the distance between objects.
  2. We will first initialize the servos of the Mars Rover with the block “Set head pins()”.
  3. Then we will make all the servos rotate to 90 degrees if they are not initialized.
  4. Thereafter we will initialize the ultrasonic sensors and define the minimum and maximum distance variables.
  5. The main logic first checks whether the measured distance is less than the minimum distance. If it is, the head servo moves to 45 degrees and checks again whether the distance there is greater than the maximum distance; if so, the rover moves in that direction.
  6. With the help of the head servo, the robot checks the distance at 90 degrees, 45 degrees, 135 degrees, 0 degrees, and 180 degrees, in that order.
  7. Whenever the measured distance is less than the minimum distance, the head servo changes direction to the next angle in the sequence and checks the distance again.
  8. In the worst case, where every angle is blocked by an obstacle, the robot reverses its direction by rotating 180 degrees. In this way the robot navigates its way around every obstacle. (A simplified Python sketch of this scanning logic follows the list.)
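
The scanning order described above can be condensed into a plain-Python sketch (simplified to a single distance threshold). Here distance_at stands in for rotating the head servo to an angle and reading the ultrasonic sensor; the 20 cm limit comes from the project description.

MIN_DISTANCE = 20  # cm: anything closer counts as an obstacle

# Angles checked by the head servo, in the order used by the rover.
SCAN_ANGLES = (90, 45, 135, 0, 180)

def pick_direction(distance_at):
  # Return the first head angle with a clear path, or "u-turn" if all are blocked.
  for angle in SCAN_ANGLES:
    if distance_at(angle) > MIN_DISTANCE:
      return angle
  return "u-turn"  # every direction blocked: reverse by turning 180 degrees

# Example: only the 180-degree direction is clear.
fake_readings = {90: 10, 45: 12, 135: 8, 0: 15, 180: 60}
print(pick_direction(lambda angle: fake_readings[angle]))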

Code:

Main Functions:

 

Final Main Logic:

Output

 

Read More
Learn how to set the bounding box threshold, and detect signals such as 'Go', 'TurnRight', 'TurnLeft', and 'Stop' to control humanoid movements.

Introduction

A sign detector humanoid robot is a robot that can recognize and interpret certain signs or signals, such as hand gestures or verbal commands, given by a human. The robot uses sensors, cameras, and machine learning algorithms to detect and understand the sign, and then performs a corresponding action based on the signal detected.

These robots are often used in manufacturing, healthcare, and customer service industries to assist with tasks that require human-like interaction and decision making.

Code

sprite = Sprite('Tobi')
quarky = Quarky()


import time
humanoid = Humanoid(7, 2, 6, 3, 8, 1)
recocards = RecognitionCards()
recocards.video("on flipped")
recocards.enablebox()
recocards.setthreshold(0.6)

while True:
  recocards.analysecamera()
  sign = recocards.classname()
  sprite.say(sign + ' detected')
  if recocards.count() > 0:
    if 'Go' in sign:
      humanoid.move("forward", 1000, 1)
      
    if 'Turn Left' in sign:
      humanoid.move("backward", 1000, 1)
      
    if 'Turn Right' in sign:
      humanoid.move("left", 1000, 1)
      
    if 'U Turn' in sign:
      humanoid.move("backward", 1000, 1)
      
      
    

Logic

  1. First, the code sets up the robot's camera to look for hand signs, and tells it how to recognize different signs.
  2. Next, the code starts a loop where the robot looks for hand signs. If it sees a sign, it says the name of the sign out loud.
  3. Finally, if the robot sees certain signs (like ‘Go’, ‘Turn Left’, ‘Turn Right’, or ‘U Turn’), it moves in a certain direction (forward, backward, left, or backward) based on the sign it sees.
  4. So, this code helps a robot understand hand signs and move in response to them!

Output

Read More
Learn how to use the Hand Gesture Classifier of the Machine Learning Environment to make a machine-learning model that identifies hand gestures and makes the Mars Rover move accordingly.

This project demonstrates how to use Machine Learning Environment to make a machine–learning model that identifies hand gestures and makes the Mars Rover move accordingly.

We are going to use the Hand Classifier of the Machine Learning Environment. The model works by analyzing your hand position with the help of 21 data points.

Hand Gesture Classifier Workflow

Follow the steps below:

  1. Open PictoBlox and create a new file.
  2. Select the appropriate coding environment (here, Python Coding).
  3. Select the “Open ML Environment” option under the “Files” tab to access the ML Environment.
  4. Click on “Create New Project“.
  5. A window will open. Type in a project name of your choice and select the “Hand Gesture Classifier” extension. Click the “Create Project” button to open the Hand Pose Classifier window.
  6. You shall see the Classifier workflow with two classes already made for you. Your environment is all set. Now it’s time to upload the data.

Class in Hand Gesture Classifier

There are 2 things that you have to provide in a class:

  1. Class Name: The name by which the class will be referred to.
  2. Hand Pose Data: This data can either be taken from the webcam or by uploading from local storage.

Note: You can add more classes to the projects using the Add Class button.
Adding Data to Class

You can perform the following operations to add data to a class.

  1. Naming the Class: You can rename the class by clicking on the edit button.
  2. Adding Data to the Class: You can add the data using the Webcam or by Uploading the files from the local folder.
    1. Webcam:
Note: You must add at least 20 samples to each of your classes for your model to train. More samples will lead to better results.
Training the Model

After data is added, it’s fit to be used in model training. In order to do this, we have to train the model. By training the model, we extract meaningful information from the hand pose, and that in turn updates the weights. Once these weights are saved, we can use our model to make predictions on data previously unseen.

The accuracy of the model should increase over time. The x-axis of the graph shows the epochs, and the y-axis represents the accuracy at the corresponding epoch. Remember, the higher the reading in the accuracy graph, the better the model. The range of the accuracy is 0 to 1.

Testing the Model

To test the model, simply enter the input values in the “Testing” panel and click on the “Predict” button.

The model will return the probability of the input belonging to the classes.

Export in Python Coding

Click on the “Export Model” button on the top right of the Testing box, and PictoBlox will load your model into the Python Coding Environment if you have opened the ML Environment in Python Coding.

Code

The following code appears in the Python Editor of the selected sprite.

####################imports####################
# Do not change

import numpy as np
import tensorflow as tf
import time
sprite=Sprite('Tobi')
import time
quarky = Quarky()
rover = MarsRover(4, 1, 7, 2, 6)
# Do not change
####################imports####################

#Following are the model and video capture configurations
# Do not change

model=tf.keras.models.load_model(
    "num_model.h5",
    custom_objects=None,
    compile=True,
    options=None)
pose = Posenet()                                                    # Initializing Posenet
pose.enablebox()                                                    # Enabling video capture box
pose.video("on",0)                                                  # Taking video input
class_list=['forward','backward','left','right','stop']                  # List of all the classes
def runQuarky(predicted_class):
    if pose.ishanddetected():
      if predicted_class == "forward":
        rover.home()
        rover.setinangle(0)
        quarky.runtimedrobot("F",100,3)
      if predicted_class == "backward":
        rover.home()
        rover.setinangle(0)
        quarky.runtimedrobot("B",100,3)
      if predicted_class == "left":
        rover.home()
        rover.setinangle(40)
        quarky.runtimedrobot("L",100,3)
      if predicted_class == "right":
        rover.home()
        rover.setinangle(40)
        quarky.runtimedrobot("R",100,3)
      if predicted_class == "stop":
        quarky.stoprobot()
    else:
      quarky.stoprobot()

# Do not change
###############################################

#This is the while loop block, computations happen here
# Do not change

while True:
  pose.analysehand()                                             # Using Posenet to analyse hand pose
  coordinate_xy=[]
    
    # for loop to iterate through 21 points of recognition
  for i in range(21):
    if(pose.gethandposition(1,i,0)!="NULL"  or pose.gethandposition(2,i,0)!="NULL"):
      coordinate_xy.append(int(240+float(pose.gethandposition(1,i,0))))
      coordinate_xy.append(int(180-float(pose.gethandposition(2,i,0))))
    else:
      coordinate_xy.append(0)
      coordinate_xy.append(0)
            
  coordinate_xy_tensor = tf.expand_dims(coordinate_xy, 0)        # Expanding the dimension of the coordinate list
  predict=model.predict(coordinate_xy_tensor)                    # Making an initial prediction using the model
  predict_index=np.argmax(predict[0], axis=0)                    # Generating index out of the prediction
  predicted_class=class_list[predict_index]                      # Tallying the index with class list
  print(predicted_class)
  runQuarky(predicted_class)
    
  # Do not change

Logic

  1. If the identified class from the analyzed image is “forward,” the Mars Rover will move forward at a specific speed.
  2. If the identified class is “backward,” the Mars Rover will move backward.
  3. If the identified class is “left,” the Mars Rover will move left.
  4. If the identified class is “right,” the Mars Rover will move right.
  5. Otherwise (for the “stop” class or when no hand is detected), the Mars Rover stops. The runQuarky() helper below implements this mapping.
def runQuarky(predicted_class):
    if pose.ishanddetected():
      if predicted_class == "forward":
        rover.home()
        rover.setinangle(0)
        quarky.runtimedrobot("F",100,3)
      if predicted_class == "backward":
        rover.home()
        rover.setinangle(0)
        quarky.runtimedrobot("B",100,3)
      if predicted_class == "left":
        rover.home()
        rover.setinangle(40)
        quarky.runtimedrobot("L",100,3)
      if predicted_class == "right":
        rover.home()
        rover.setinangle(40)
        quarky.runtimedrobot("R",100,3)
      if predicted_class == "stop":
        quarky.stoprobot()
    else:
      quarky.stoprobot()

Output

Read More
Discover the exciting world of face-tracking robots and learn how to code one using sensors and algorithms.

Introduction

A face-tracking robot is a type of robot that uses sensors and algorithms to detect and track human faces in real time. The robot’s sensors, such as cameras or infrared sensors, capture images or videos of the surrounding environment and use computer vision techniques to analyze the data and identify human faces.

Face-tracking robots have many potential applications, including in security systems, entertainment, and personal robotics. For example, a face-tracking robot could be used in a museum or amusement park to interact with visitors, or in a home as a companion robot that can recognize and follow the faces of family members.

One of the most fascinating activities is face tracking, in which the Humanoid can detect a face and move its head in the same direction as yours. How intriguing it sounds, so let’s get started with the coding for a face-tracking Humanoid robot.

Logic

  1. If the face is tracked at the center of the stage, the Humanoid should be straight.
  2. As the face moves to the left side, the Humanoid will also move to the left side.
  3. As the face moves to the right side, the Humanoid will also move to the right side.

Code Explained

  1. Drag and drop the when green flag clicked block from the Events palette.
  2. Then, add a turn () video on stage with () % transparency block from the Face Detection extension and select an option from the drop-down. This will turn on the camera.
  3. Add the set head pin () FLeft () FRight () BLeft () BRight () block from the Humanoid extension.
  4. Click on the green flag and your camera should start. Make sure this part is working before moving further.
  5. Add the forever block from the Control palette below the turn () video on stage with () % transparency block.
  6. Inside the forever block, add an analyse image from () block. This block will analyze the face the camera detects. Select the camera from the dropdown.
  7. Create a variable called Angle that will track the angle of the face. Based on the angle, the robot will move to adjust its position.
  8. Here comes the logical part: the position of the face on the stage determines the angle. Add the division () / () block from the Operator palette into the scripting area.
  9. Place the get () of face () block in the first slot of the division block and 3 in the second slot; from the dropdown, select X position. Then add 90 to this result with the addition () + () block and set the Angle variable to the sum.
  10. If the angle value is greater than 90, the Humanoid will move left at a specific speed. If the angle is less than 90, the Humanoid will move right at a specific speed. If the angle is exactly 90, the Humanoid will return to its home position.

Block Explained

  1. Create a variable called Angle and assign it the value derived from the face’s position.
  2. At the center of the stage, the X position value is zero.
  3. As the face moves to the left side, the X position becomes negative; as it moves to the right side, the X position becomes positive.
  4. The X position value is divided by 3 to scale it into a usable range for the head angle.
  5. To set the angle to 90 when the face is at the center of the stage, we add 90 to the scaled X position value.
  6. As the face moves to the left, the angle decreases because the X position goes negative.
  7. As the face moves to the right, the angle increases because the X position goes positive.
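For reference, the same angle calculation can be written in the Python environment. This is only a minimal sketch, assuming the FaceDetection and Humanoid Python APIs used elsewhere in this documentation; the block script above remains the reference implementation.

sprite = Sprite('Tobi')
quarky = Quarky()
humanoid = Humanoid(7, 2, 6, 3, 8, 1)

fd = FaceDetection()
fd.video("on", 0)
fd.enablebox()

while True:
  fd.analysestage()
  if fd.count() > 0:
    # Divide the X position by 3 and add 90 so the centre of the stage maps to 90
    angle = int(fd.x(1) / 3 + 90)
    if angle > 90:
      humanoid.move("left", 1000, 3)
    elif angle < 90:
      humanoid.move("right", 1000, 3)
    else:
      humanoid.home()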

Code

Output

Our next step is to check whether it is working correctly. Whenever your face comes in front of the camera, the robot should detect it, and as you move to the right or left, the head of your Humanoid robot should move accordingly.

Read More
In this activity, learn how to create a new Machine Learning model that will be able to identify and detect different types of hand poses and that can help us to control the Mecanum Gripper Robot.

In this activity, we will try to create a new Machine Learning model that will be able to identify and detect different types of hand poses and that can help us to control the Mecanum Gripper Robot. This activity can be quite fun and by knowing the process, you can develop your own customized hand pose classifier model easily!

We will use the same model that we have created in the previous Hand Pose Controlled Mecanum model to avoid any misdirection and confusion.

Note: You can always create your own model and use it to perform any functions of your choice. This example illustrates that point and helps you understand the concept of Machine Learning models and the ML environment.

Hand Gesture Classifier Workflow

Follow the steps below:

  1. Open PictoBlox and create a new file.
  2. Select Block Coding as the coding environment.
  3. Select the “Open ML Environment” option under the “Files” tab to access the ML Environment.
  4. Click on “Create New Project“.
  5. A window will open. Type in a project name of your choice and select the “Hand Gesture Classifier” extension. Click the “Create Project” button to open the Hand Pose Classifier window.
  6. You shall see the Classifier workflow with two classes already made for you. Your environment is all set. Now it’s time to upload the data.

Class in Hand Gesture Classifier

There are 2 things that you have to provide in a class:

  1. Class Name: It’s the name by which the class will be referred.
  2. Hand Pose Data: This data can either be taken from the webcam or by uploading from local storage.

Note: You can add more classes to the projects using the Add Class button.
Adding Data to Class

You can perform the following operations to manipulate the data into a class.

  1. Naming the Class: You can rename the class by clicking on the edit button.
  2. Adding Data to the Class: You can add the data using the Webcam or by Uploading the files from the local folder.
    1. Webcam:
Note: You must add at least 20 samples to each of your classes for your model to train. More samples will lead to better results.
Training the Model

After data is added, it’s fit to be used in model training. In order to do this, we have to train the model. By training the model, we extract meaningful information from the hand pose, and that in turn updates the weights. Once these weights are saved, we can use our model to make predictions on data previously unseen.

The accuracy of the model should increase over time. The x-axis of the graph shows the epochs, and the y-axis represents the accuracy at the corresponding epoch. Remember, the higher the reading in the accuracy graph, the better the model. The range of the accuracy is 0 to 1.

Testing the Model

To test the model, simply enter the input values in the “Testing” panel and click on the “Predict” button.

The model will return the probability of the input belonging to the classes.

Export in Block Coding

Click on the “Export Model” button on the top right of the Testing box, and PictoBlox will load your model into the Block Coding Environment if you have opened the ML Environment in the Block Coding.

Logic

The Mecanum will move according to the following logic:

  1. If the detected class is “Forward”, we will make the Mecanum move forward.
  2. When the “Backward” gesture is detected, the Mecanum will move backward.
  3. When the “Lateral Left” gesture is detected, the Mecanum will move laterally to the left with the help of its omnidirectional wheels.
  4. When the “Lateral Right” gesture is detected, the Mecanum will move laterally to the right with the help of its omnidirectional wheels.
  5. When the “Normal Right” gesture is detected, the Mecanum will rotate on the spot towards the right.
  6. When the “Normal Left” gesture is detected, the Mecanum will rotate on the spot towards the left.
  7. When the “Stop” gesture is detected, we will use the gripper functions of the Mecanum to pick up the object by closing the gripper arms.
  8. When the “Circular Motion” gesture is detected, we will use the gripper functions of the Mecanum to drop the object by opening the gripper arms.

Code

Initialization

Main Code

 

Output

Forward-Backward Motion:

Circular Right-Left Motion:

Lateral Right-Left Motions:

Gripper Mechanism with Hand Gestures:

Read More
In this activity, learn how to create a new Machine Learning model that will be able to identify and detect different types of hand poses and that can help us to control the Mecanum Gripper Robot.

In this activity, we will try to create a new Machine Learning model that will be able to identify and detect different types of hand poses and that can help us to control the Mecanum Gripper Robot. This activity can be quite fun and by knowing the process, you can develop your own customized hand pose classifier model easily!

We will use the same model that we have created in the previous Hand Pose Controlled Mecanum model to avoid any misdirection and confusion.

Note: You can always create your own model and use it to perform any functions of your choice. This example illustrates that point and helps you understand the concept of Machine Learning models and the ML environment.

Hand Gesture Classifier Workflow

Follow the steps below:

  1. Open PictoBlox and create a new file.
  2. Select Python Coding as the coding environment.
  3. Select the “Open ML Environment” option under the “Files” tab to access the ML Environment.
  4. Click on “Create New Project“.
  5. A window will open. Type in a project name of your choice and select the “Hand Gesture Classifier” extension. Click the “Create Project” button to open the Hand Pose Classifier window.
  6. You shall see the Classifier workflow with two classes already made for you. Your environment is all set. Now it’s time to upload the data.

Class in Hand Gesture Classifier

There are 2 things that you have to provide in a class:

  1. Class Name: It’s the name by which the class will be referred.
  2. Hand Pose Data: This data can either be taken from the webcam or by uploading from local storage.

Note: You can add more classes to the projects using the Add Class button.

Adding Data to Class

You can perform the following operations to manipulate the data into a class.

  1. Naming the Class: You can rename the class by clicking on the edit button.
  2. Adding Data to the Class: You can add the data using the Webcam or by Uploading the files from the local folder.
    1. Webcam:
Note: You must add at least 20 samples to each of your classes for your model to train. More samples will lead to better results.

Training the Model

After data is added, it’s fit to be used in model training. In order to do this, we have to train the model. By training the model, we extract meaningful information from the hand pose, and that in turn updates the weights. Once these weights are saved, we can use our model to make predictions on data previously unseen.

The accuracy of the model should increase over time. The x-axis of the graph shows the epochs, and the y-axis represents the accuracy at the corresponding epoch. Remember, the higher the reading in the accuracy graph, the better the model. The range of the accuracy is 0 to 1.

Testing the Model

To test the model, simply enter the input values in the “Testing” panel and click on the “Predict” button.

The model will return the probability of the input belonging to the classes.

Export in Python Coding

Click on the “Export Model” button on the top right of the Testing box, and PictoBlox will load your model into the Python Coding Environment if you have opened the ML Environment in the Python Coding.

Logic

The Mecanum will move according to the following logic:

  1. If the detected class is “Forward”, we will make the Mecanum move forward.
  2. When the “Backward” gesture is detected, the Mecanum will move backward.
  3. When the “LateralLeft” gesture is detected, the Mecanum will move laterally to the left with the help of its omnidirectional wheels.
  4. When the “LateralRight” gesture is detected, the Mecanum will move laterally to the right with the help of its omnidirectional wheels.
  5. When the “NormalRight” gesture is detected, the Mecanum will rotate on the spot towards the right.
  6. When the “NormalLeft” gesture is detected, the Mecanum will rotate on the spot towards the left.
  7. When the “Stop” gesture is detected, we will use the gripper functions of the Mecanum to pick up the object by closing the gripper arms.
  8. When the “CircularMotion” gesture is detected, we will use the gripper functions of the Mecanum to drop the object by opening the gripper arms.

Code

Logical Code:

meca=Mecanum(1,2,7,8)
meca.initialisegripper(5)
meca.setcloseangle(90)
meca.setopenangle(0)

def runmecanum(predicted_class):
  if pose.ishanddetected():
    if predicted_class=="Forward":
      meca.runtimedrobot("forward",100,2)
    if predicted_class=="Backward":
      meca.runtimedrobot("backward",100,2)
    if predicted_class=="Stop":
      meca.closearm()
    if predicted_class=="LateralRight":
      meca.runtimedrobot("lateral right",100,2)
    if predicted_class=="LateralLeft":
      meca.runtimedrobot("lateral left",100,2)
    if predicted_class=="NormalRight":
      meca.runtimedrobot("circular right",100,1)
    if predicted_class=="NormalLeft":
      meca.runtimedrobot("circular left",100,1)
    if predicted_class=="CircularMotion":
      meca.openarm()

Final Code

  1. We create a custom function, runmecanum(), as shown above, which controls the Gripper Mecanum robot with the help of the Machine Learning model we created.
  2. We also initialize the Mecanum and gripper pins before writing the main loop, and set the gripper closing and opening angles with the setcloseangle() and setopenangle() functions.
  3. Finally, we call runmecanum() with the predicted class at the end of the main loop so that the robot responds to each prediction.
####################imports####################
# Do not change

import numpy as np
import tensorflow as tf
import time

# Do not change
####################imports####################

#Following are the model and video capture configurations
# Do not change

model=tf.keras.models.load_model(
    "num_model.h5",
    custom_objects=None,
    compile=True,
    options=None)
pose = Posenet()                                                    # Initializing Posenet
pose.enablebox()                                                    # Enabling video capture box
pose.video("on",0)                                                  # Taking video input
class_list=['Forward','Backward','Stop','LateralRight','LateralLeft','NormalRight','NormalLeft','CircularMotion']                  # List of all the classes
meca=Mecanum(1,2,7,8)
meca.initialisegripper(5)
meca.setcloseangle(90)
meca.setopenangle(0)

def runmecanum(predicted_class):
  if pose.ishanddetected():
    if predicted_class=="Forward":
      meca.runtimedrobot("forward",100,2)
    if predicted_class=="Backward":
      meca.runtimedrobot("backward",100,2)
    if predicted_class=="Stop":
      meca.closearm()
    if predicted_class=="LateralRight":
      meca.runtimedrobot("lateral right",100,2)
    if predicted_class=="LateralLeft":
      meca.runtimedrobot("lateral left",100,2)
    if predicted_class=="NormalRight":
      meca.runtimedrobot("circular right",100,1)
    if predicted_class=="NormalLeft":
      meca.runtimedrobot("circular left",100,1)
    if predicted_class=="CircularMotion":
      meca.openarm()
# Do not change
###############################################

#This is the while loop block, computations happen here
# Do not change

while True:
  pose.analysehand()                                             # Using Posenet to analyse hand pose
  coordinate_xy=[]
    
    # for loop to iterate through 21 points of recognition
  for i in range(21):
    if(pose.gethandposition(1,i,0)!="NULL"  or pose.gethandposition(2,i,0)!="NULL"):
      coordinate_xy.append(int(240+float(pose.gethandposition(1,i,0))))
      coordinate_xy.append(int(180-float(pose.gethandposition(2,i,0))))
    else:
      coordinate_xy.append(0)
      coordinate_xy.append(0)
            
  coordinate_xy_tensor = tf.expand_dims(coordinate_xy, 0)        # Expanding the dimension of the coordinate list
  predict=model.predict(coordinate_xy_tensor)                    # Making an initial prediction using the model
  predict_index=np.argmax(predict[0], axis=0)                    # Generating index out of the prediction
  predicted_class=class_list[predict_index]                      # Tallying the index with class list
  print(predicted_class)
  runmecanum(predicted_class)
  # Do not change

Output

Forward-Backward Motion:

Circular Right-Left Motion:

Lateral Right-Left Motions:

Gripper Mechanism with Hand Gestures:

Read More
Learn how to create a crawling motion with a quadruped robot using individual servo control.

Introduction

The project demonstrates how to make the crawling motion with Quadruped using individual servo control.


There are four positions of the robot we are going to make to create the crawling motion:

  1. Position 1

    quad.moveall([60,120,0,180,60,120,180,0],Speed)

     

  2. Position 2

    quad.moveall([30,150,60,120,60,120,120,60],Speed)

  3. Position 3

    quad.moveall([120,60,60,120,120,60,120,60],Speed)

  4. Position 4

    quad.moveall([120,60,0,180,150,30,180,0],Speed)

 

Code

sprite = Sprite('Tobi')
quarky = Quarky()

quad = Quadruped(4, 1, 8, 5, 3, 2, 7, 6)   # Servo pin assignments for the four legs

Speed = 250
while True:
	quad.moveall([60,120,0,180,60,120,180,0],Speed)     # Position 1
	quad.moveall([30,150,60,120,60,120,120,60],Speed)    # Position 2
	quad.moveall([120,60,60,120,120,60,120,60],Speed)    # Position 3
	quad.moveall([120,60,0,180,150,30,180,0],Speed)      # Position 4

Logic

  1. The quad instance represents the quadruped robot and is initialized with the pin numbers to which each of its leg servos is connected.
  2. The “moveall” method takes a list of eight angles as its input, with each angle representing the angle of a joint in one of the quadruped’s legs.
  3. The method uses these angles to set the position of each leg and moves them accordingly.
  4. The variable “Speed” specifies the speed at which the quadruped moves during each step of the loop. A compact variant of this loop is sketched below.
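The four key frames can also be stored in a list and cycled through. This is only a rearrangement of the code above, re-using the quad object and Speed value already defined; the frame values are identical, so the gait does not change.

crawl_frames = [
  [60, 120, 0, 180, 60, 120, 180, 0],     # Position 1
  [30, 150, 60, 120, 60, 120, 120, 60],   # Position 2
  [120, 60, 60, 120, 120, 60, 120, 60],   # Position 3
  [120, 60, 0, 180, 150, 30, 180, 0],     # Position 4
]

while True:
  for frame in crawl_frames:
    quad.moveall(frame, Speed)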

Output

 

Read More
Learn how to set the bounding box threshold, and detect signals such as 'Go', 'TurnRight', 'TurnLeft', and 'Stop' to control quadruped movements.

Introduction

A sign detector Quadruped robot is a robot that can recognize and interpret certain signs or signals, such as hand gestures or verbal commands, given by a human. The robot uses sensors, cameras, and machine learning algorithms to detect and understand the sign, and then performs a corresponding action based on the signal detected.

These robots are often used in manufacturing, healthcare, and customer service industries to assist with tasks that require human-like interaction and decision-making.

Code

Logic

  1. First, the code sets up the quadruped robot’s camera to look for hand signs and tells it how to recognize the different signs.
  2. Next, the code starts a loop where the robot looks for hand signs. If it sees a sign, it says the name of the sign out loud.
  3. Finally, if the robot sees certain signs (like ‘Go’, ‘Turn Left’, ‘Turn Right’, or ‘U Turn’), it moves in the corresponding direction (forward, left, right, or backward).
  4. In short, this code helps a robot understand hand signs and move in response to them! A minimal Python sketch of this logic follows the list.
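The sketch below mirrors the sign-detector code used for the Humanoid earlier on this page. It assumes the Quadruped extension provides a move() helper analogous to Humanoid.move(); if your version of the extension uses different motion calls, substitute them accordingly.

sprite = Sprite('Tobi')
quarky = Quarky()

quad = Quadruped(4, 1, 8, 5, 3, 2, 7, 6)
recocards = RecognitionCards()
recocards.video("on flipped")
recocards.enablebox()
recocards.setthreshold(0.6)            # Bounding box threshold

while True:
  recocards.analysecamera()
  sign = recocards.classname()
  sprite.say(sign + ' detected')
  if recocards.count() > 0:
    if 'Go' in sign:
      quad.move("forward", 1000, 1)    # assumed helper, analogous to Humanoid.move()
    if 'Turn Left' in sign:
      quad.move("left", 1000, 1)
    if 'Turn Right' in sign:
      quad.move("right", 1000, 1)
    if 'U Turn' in sign:
      quad.move("backward", 1000, 1)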

Output

Read More

Introduction

Welcome to the world of Synonyms and Antonyms! Here, you can effortlessly explore alternative words that share similar or opposite meanings to a given term. Expand your vocabulary and enhance your language skills by diving into the vast realm of synonyms and antonyms.

Discover the richness of language as you uncover words that convey similar meanings and delve into the contrasting concepts that evoke different emotions. With the ChatGPT extension, you’ll have access to an immersive and interactive experience. Simply ask for synonyms or antonyms, and ChatGPT will provide you with a variety of options, broadening your understanding of word relationships.

Code
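A minimal Python sketch of the idea, using the generic askOnChatGPT() and chatGPTresult() calls shown in the chatbot example further down this page; if the ChatGPT extension in your PictoBlox version offers dedicated synonym and antonym functions, those can be used instead.

sprite = Sprite('Tobi')
gpt = ChatGPT()

# Ask the user for a word, then request synonyms and antonyms from ChatGPT
sprite.input("Enter a word")
word = str(sprite.answer())

gpt.askOnChatGPT("AIAssistant", "Give synonyms and antonyms for the word: " + word)
result = gpt.chatGPTresult()
print(result)
sprite.say(result, 5)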

 

Read More
Engage in interactive conversations with the AI assistant powered by ChatGPT and Sprite Tobi.

Introduction

The Chatbox with ChatGPT Extension is a versatile tool that enables developers to integrate AI-driven conversations into their applications. It leverages the power of the ChatGPT model to create interactive and intelligent chat experiences. With this extension, you can build chatbots, virtual assistants, or conversational agents that understand and respond to user inputs naturally.

The code creates a character named “Tobi” and uses speech recognition to understand spoken commands. It then asks a question to the AI assistant (ChatGPT) and displays the response on the screen, converts it into speech, and makes the character “Tobi” speak the response.

Code

sprite = Sprite('Tobi')
gpt = ChatGPT()

sr = SpeechRecognition()
speech = TexttoSpeech()

sr.analysespeech(4, "en-US")    # Listen for 4 seconds of English speech
command = sr.speechresult()     # Recognised speech as text
answer = command.lower()

# Alternative: take typed input instead of speech
# sprite.input("Provide a valid word")
# answer = str(sprite.answer())

gpt.askOnChatGPT("AIAssistant", answer)
# Other askOnChatGPT options: "friendAI", "sarcasticAI"
result = gpt.chatGPTresult()
print(result)
speech.speak(result)            # Speak the response aloud

sprite.say(result, 5)           # Tobi says the response for 5 seconds

Logic

  1. Open PictoBlox and create a new file.
  2. Choose the Python Coding Environment.
  3. We create an instance of SpeechRecognition. This class allows us to convert spoken audio into text.
  4. Next, we create an instance of the ChatGPT model called gpt. ChatGPT is a language model that can generate human-like text responses based on the input it receives.
  5. The sprite object has already been initialized.
  6. We analyze the spoken speech for 4 seconds, assuming it is in English, using analysespeech().
  7. We store the recognized speech as text in the variable command using speechresult().
  8. We convert the recognized speech to lowercase and store it in the variable answer.
  9. We ask the AI model ChatGPT (acting as an AI assistant) a question based on the user’s input stored in answer.
  10. We retrieve the response from the AI model and store it in the variable result using chatGPTresult().
  11. We display the response on the screen using print(result).
  12. We convert the response into speech and play it aloud using the speech object’s speak() method.
  13. The character “Tobi” speaks the response for 5 seconds using say().
  14. Press the Run button to run the code.

Output

I asked ChatGPT for a joke, and it came back with an interesting response.

Read More
This example shows how to use pose recognition in PictoBlox to make a jumping jack counter.

Introduction

In this example project, we are going to create a machine learning model that can count the number of jumping jack activities from the camera feed.

Pose Classifier in Machine Learning Environment

The Pose Classifier is the extension of the ML Environment used for classifying different body poses into different classes.

The model works by analyzing your body position with the help of 17 data points.

Pose Classifier Workflow

  1. Open PictoBlox and create a new file.
  2. You can click on “Machine Learning Environment” to open it.
  3. Click on “Create New Project“.
  4. A window will open. Type in a project name of your choice and select the “Pose Classifier” extension. Click the “Create Project” button to open the Pose Classifier window.
  5. You shall see the Pose Classifier workflow with two classes already made for you. Your environment is all set. Now it’s time to upload the data.

Class in Pose Classifier

Class is the category in which the Machine Learning model classifies the poses. Similar poses are put in one class.

There are 2 things that you have to provide in a class:

  1. Class Name: The name to which the class will be referred.
  2. Pose Data: This data can be taken from the webcam or uploaded from local storage.

Adding Data to Class

You can perform the following operations to manipulate the data into a class.

  1. Naming the Class: You can rename the class by clicking on the edit button.
  2. Adding Data to the Class: You can add the data using the Webcam or by Uploading the files from the local folder.
    1. Webcam:

Training the Model

After data is added, it’s fit to be used in model training. To do this, we have to train the model. By training the model, we extract meaningful information from the body pose, and that in turn updates the weights. Once these weights are saved, we can use our model to predict previously unseen data.

The accuracy of the model should increase over time. The x-axis of the graph shows the epochs, and the y-axis represents the accuracy at the corresponding epoch. Remember, the higher the reading in the accuracy graph, the better the model. The range of accuracy is 0 to 1.

Testing the Model

To test the model, simply enter the input values in the “Testing” panel and click on the “Predict” button.

The model will return the probability of the input belonging to the classes.

Export in Block Coding

Click on the “Export Model” button on the top right of the Testing box, and PictoBlox will load your model into the Block Coding Environment if you have opened the ML Environment in the Block Coding.

Script

The idea is simple: after running the code, we will do the jumping jack activity in front of the camera, and the Tobi sprite will say the jumping jack count.

  1. Select the Tobi sprite.
  2. We’ll start by adding a when flag clicked block from the Events palette.
  3. Make a new variable “count” by choosing the “Make a Variable” option from the Variables palette.
  4. Also make a new variable “temp” by choosing the “Make a Variable” option from the Variables palette.
  5. Add a “forever” block from the Control palette.
  6. Inside the “forever” block, add an “analyse image from ()” block from the Machine Learning palette. Select the Web camera option.
  7. Inside the “forever” block, add an “if () then” block from the Control palette.
  8. In the empty space of the “if () then” block, add a “key () pressed?” block from the Sensing palette. Select the ‘q’ key from the options.
  9. Inside the “if () then” block, add the “set () to ()” block from the Variables palette. Select the count option in the first empty space and write 0 in the second.
  10. Also add the “set () to ()” block from the Variables palette. Select the temp option in the first empty space and write 0 in the second.
  11. Inside the “forever” block, add a new “if () then” block from the Control palette.
  12. In the empty space of the “if () then” block, add an “is identified class ()” block from the Machine Learning palette. Select the ‘Upper hand’ option.
  13. Inside the “if () then” block, add the “set () to ()” block from the Variables palette. Select the temp option in the first empty space and write 1 in the second.
  14. Inside the “forever” block, add a new “if () then” block from the Control palette.
  15. In the empty space of the “if () then” block, add an “is identified class ()” block from the Machine Learning palette. Select the ‘Down hand’ option.
  16. Inside the “if () then” block, add another “if () then” block from the Control palette.
  17. In the empty space of this inner “if () then” block, add an equality block from the Operators palette. In the first space, put the temp variable from the Variables palette, and in the second, write 1.
  18. Inside this inner “if () then” block, add the “change () by ()” block from the Variables palette. Select the count option and write 1, so that count increases by 1 each time a full jumping jack (upper hand followed by down hand) is completed.
  19. Also add the “set () to ()” block from the Variables palette. Select the temp option in the first empty space and write 0 in the second.
  20. Then add a “say () for () seconds” block from the Looks palette. In the first space, add the “join () ()” block from the Operators palette, and in the second, write 2.
  21. Inside the “join () ()” block, write an appropriate statement in the first space and add the count variable from the Variables palette in the second. A minimal Python sketch of the same counting logic is shown below.
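For readers working in the Python environment, here is a minimal sketch of the same counting logic. It assumes the pose model has been exported to Python with the classes “Upper hand” and “Down hand”; get_predicted_class() is a hypothetical placeholder for the exported prediction boilerplate (analysing the pose, building the 17-point coordinate list, and running model.predict), which is omitted here for brevity.

sprite = Sprite('Tobi')

count = 0
temp = 0

while True:
  # Placeholder for the exported model's prediction step; replace it with the
  # generated code that returns the currently identified class name.
  predicted_class = get_predicted_class()

  if predicted_class == "Upper hand":
    temp = 1                   # Arms up: first half of a jumping jack seen
  if predicted_class == "Down hand" and temp == 1:
    count = count + 1          # Arms back down: one full jumping jack completed
    temp = 0
    sprite.say("Jumping jacks: " + str(count), 2)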

Final Output

     

Read More
Explore the functionality of a raindrop sensor, an analog-type sensor that detects changes in resistance upon contact with water.
Introduction

The raindrop sensor is an analog-type sensor that effectively measures changes in resistance when it encounters water. This property makes it an ideal choice for detecting rain and water presence in various applications. While typically designed with two pins, there are also versions available with a controller module, effectively converting it into a three-pin sensor for enhanced functionality.

rain drop sensor

Circuit

 

To set up the raindrop sensor circuit, make the following connections:

  • Sensor:
    • VCC: Connect to the 5V power supply
    • GND: Connect to ground (GND)
    • A0: Connect to analog input pin A2
  • Buzzer:
    • Buzzer+: Connect to digital pin D2
    • Buzzer-: Connect to ground (GND)

Code

  1. Add an “if-else” block from the Control palette.
  2. Insert a comparison operator into the “if” block from the Operator palette.
  3. Check whether the value of the raindrop sensor is below a certain threshold, say 800. From the Sensor palette of Quarky, add “read analog sensor () at pin ()” and place it in the blank space of the operator.
  4. If the value is below the set limit, activate the buzzer (alarm) connected to pin D2. Add “set digital pin () output as ()” from the Quarky palette within the “if” branch. In the “else” branch, ensure the alarm remains off when the raindrop sensor value is above the set limit.
  5. Place this set of blocks inside a “forever” block to continuously monitor the sensor’s readings.
  6. Finally, add a “when flag clicked” block at the start of the script to initiate the raindrop sensor monitoring. A Python sketch of the same logic follows these steps.
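The same logic expressed as a minimal Python sketch. The helpers read_rain_level() and set_buzzer() are hypothetical placeholders standing in for Quarky’s “read analog sensor () at pin ()” (pin A2) and “set digital pin () output as ()” (pin D2) operations; replace them with the actual calls available in your PictoBlox version.

sprite = Sprite('Tobi')
quarky = Quarky()

THRESHOLD = 800              # Analog reading below this value indicates rain

def read_rain_level():
  # Hypothetical placeholder for reading the analog raindrop sensor on pin A2.
  # Substitute the real Quarky analog-read call here.
  return 1023                # Dummy "dry" value so the sketch runs as-is

def set_buzzer(state):
  # Hypothetical placeholder for driving the buzzer on digital pin D2.
  print("buzzer", "on" if state else "off")

while True:
  if read_rain_level() < THRESHOLD:
    set_buzzer(1)            # Rain detected: sound the alarm
  else:
    set_buzzer(0)            # Dry: keep the alarm off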

Script

Output

 

 

Read More
[PictoBloxExtension]