[MarsRover]

MarsRover()

Function Definition: MarsRover(Head = 4, Front Left = 1, Front Right = 7, Back Left = 2, Back Right = 6)

Parameters

Name        | Type | Description                                                           | Expected Values | Default Value
Head        | int  | Servo Port Number at which the Head Servo Motor is connected.        | 1-8             | 4
Front Left  | int  | Servo Port Number at which the Front Left Servo Motor is connected.  | 1-8             | 1
Front Right | int  | Servo Port Number at which the Front Right Servo Motor is connected. | 1-8             | 7
Back Left   | int  | Servo Port Number at which the Back Left Servo Motor is connected.   | 1-8             | 2
Back Right  | int  | Servo Port Number at which the Back Right Servo Motor is connected.  | 1-8             | 6

Description

The function initializes the Mars Rover object in Python and maps the 5 servos to the specified pins.

By default the following configuration is added:

  1. Head Servo – 4
  2. Front Left Servo – 1
  3. Front Right Servo – 7
  4. Back Left Servo – 2
  5. Back Right Servo – 6
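
For reference, a minimal usage sketch of the constructor (the positional call mirrors the other examples in this documentation; pass different port numbers if your wiring differs):

rover = MarsRover()   # default mapping: Head = 4, Front Left = 1, Front Right = 7, Back Left = 2, Back Right = 6

# Equivalent explicit mapping, in the order: head, front left, front right, back left, back right
rover = MarsRover(4, 1, 7, 2, 6)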

Example

The example demonstrates a ball falling under gravity inside a box. Using the pen extension, we also track the trajectory of the ball.

Read More
In this example, we will analyze the effect of the bounce property of the sprite using a ball. We will track the ball using the pen extension and the color will change with the y velocity of the ball.

The coefficient of restitution is the ratio of the final to the initial relative speed between two objects after they collide. In PictoBlox it is controlled with bounce property. It varies from 0 to 100%.
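
As a quick numerical illustration of that definition (plain Python arithmetic, not PictoBlox blocks):

bounce = 80                # PictoBlox bounce property, 0-100 (%)
e = bounce / 100           # coefficient of restitution
v_before = 10              # speed just before hitting the floor
v_after = e * v_before     # speed just after the bounce
print(v_after)             # 8.0 -> each bounce keeps 80% of the speed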

The following is the code we will use:

Let’s see how the code works with various bounce values:

  1. Bounce = 10
  2. Bounce = 60
  3. Bounce = 70
  4. Bounce = 80
  5. Bounce = 90
  6. Bounce = 99

Earth vs Moon

If we change the gravity while keeping the bounce value at 90, we can simulate how the ball would behave on the Moon.

Read More
In this example, we are making a rocket shoot bullets toward the mouse pointer when the space key is pressed.

Rocket Script

Bullet Script

Output

Read More
In this example, we simulate a COVID experiment using the physics simulation. Following is what we are doing:

  1. 21 humans are created as balls: 20 green and 1 red.
  2. Gravity is set to 0, simulating zero gravity.
  3. The balls move at random speeds, constantly creating random behavior.
  4. If a green ball touches a red ball, it turns red.
  5. After 10 seconds, a red ball turns back to green.

Script

Output

Read More
In this example, we are making a scrolling background that can be used in games.

Tobi Script

Background Script

Output

Read More
In this example, we are making a scrolling background in 2 dimensions that can be used in games.

Rocketship Script

Background 1 Script

Background 2 Script

 

Output

Read More
This example demonstrates the use of Physics simulation in a game where Tobi has to dodge the random ball coming towards it by jumping.

Tobi Script

Ball Script

Output

Read More
The example demonstrates how to set up the stage with obstacles and make a ball fall on a series of inclines.

Line 1 Script

Line 2 Script

Ball Script

Floor Script

Output

Read More
This example demonstrates the use of Physics simulation in a game where Tobi has to navigate across the obstacles by spinning.

Tobi Script

Banana Script

Pencil Script

Output

Read More
This example demonstrates the use of Physics simulation in a game where Tobi has to go to the apple by jumping on the slabs.

Tobi Script

Slab Script

Apple Script

Output

Read More
In this example, we are going to learn how to control the flame sensor to detect flame and start the exit alarm sequence, which includes opening the gate, the alarm beeping, and the fan being turned ON.

 

Read More
In this example, we will learn how to retrieve the temperature data from the ThingSpeak channel and display it on the Quarky display.

ThingSpeak Channel

Create a channel on ThingSpeak with 2 fields – Temperature and Humidity. Send some data to the channel.

 

Get Temperature Data

The following script reads the temperature data from the cloud and shows it on the Quarky display.
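
The block script itself is shown as an image; as a rough idea of what it does, here is a hedged Python sketch using ThingSpeak’s REST read API (the channel ID and read key are placeholders, and field 1 is assumed to be Temperature):

import requests

CHANNEL_ID = "YOUR_CHANNEL_ID"    # placeholder
READ_API_KEY = "YOUR_READ_KEY"    # placeholder (required for private channels)

# Read the last value of field 1 (Temperature) from the channel
url = (f"https://api.thingspeak.com/channels/{CHANNEL_ID}"
       f"/fields/1/last.txt?api_key={READ_API_KEY}")
temperature = requests.get(url).text
print("Temperature:", temperature)   # this is the value shown on the Quarky display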

Read More
In this example, we will understand how to use HTTP requests to get weather data from the OpenWeatherMap API.

OpenWeatherMap Account

Create an account as follows:

  1. Click on Create an Account on the website: https://openweathermap.org/
  2. Fill in the signup details.
  3. Click on Create Account.
  4. Fill in the details about usage.
  5. Go to the API section and copy the API key for later use.

API Request

The following is the method provided by OpenWeatherMap for fetching the weather data using the latitude and longitude of the location.
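
As an illustration, a minimal Python sketch of such a request against OpenWeatherMap’s current-weather endpoint (the API key is a placeholder):

import requests

API_KEY = "YOUR_API_KEY"        # placeholder: the key copied in step 5
lat, lon = 27.2046, 77.4977     # coordinates of the location

url = ("https://api.openweathermap.org/data/2.5/weather"
       f"?lat={lat}&lon={lon}&appid={API_KEY}")
data = requests.get(url).json()
print(data["main"]["temp"])     # temperature, in Kelvin by default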

On making the HTTP Request, we get the following data:

{
    "coord": {
        "lon": 77.4977,
        "lat": 27.2046
    },
    "weather": [
        {
            "id": 800,
            "main": "Clear",
            "description": "clear sky",
            "icon": "01d"
        }
    ],
    "base": "stations",
    "main": {
        "temp": 304.66,
        "feels_like": 303.14,
        "temp_min": 304.66,
        "temp_max": 304.66,
        "pressure": 1012,
        "humidity": 27,
        "sea_level": 1012,
        "grnd_level": 992
    },
    "visibility": 10000,
    "wind": {
        "speed": 1.09,
        "deg": 3,
        "gust": 1.17
    },
    "clouds": {
        "all": 0
    },
    "dt": 1666781571,
    "sys": {
        "country": "IN",
        "sunrise": 1666745753,
        "sunset": 1666786313
    },
    "timezone": 19800,
    "id": 1276128,
    "name": "Bharatpur",
    "cod": 200
}

The data is in JSON format, which we can use in PictoBlox.
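
For instance, once this JSON is parsed, the temperature (reported in Kelvin) can be extracted and converted to Celsius; a small stand-alone illustration:

import json

# Trimmed sample of the response shown above
response = '{"main": {"temp": 304.66, "humidity": 27}, "name": "Bharatpur"}'
data = json.loads(response)

temp_c = data["main"]["temp"] - 273.15        # Kelvin -> Celsius
print(data["name"], round(temp_c, 1), "C")    # Bharatpur 31.5 C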

PictoBlox Script

The following script makes the HTTP request and makes the Tobi say the temperature of the location.

Output

Read More
Learn how to program a quadruped robot to perform predefined actions using PictoBlox.

Introduction

In this project, we will explain how to run predefined actions for Quadruped. By the end of the tutorial, learners will have gained knowledge and practical experience in programming a quadruped robot and controlling its movements using PictoBlox.

Quadruped Actions

There are seventeen predefined actions for the Quadruped in PictoBlox, which can be accessed through the do () action () times at () speed block.

Code

Click on the green flag to run the motion sequence.

Controls for Action Block

Using the do () action () times at () speed block, we can control the number of times the action has to be executed.

Code

Logic

  1. To set up the quadruped, you can drag and drop the set pins FR Hip () FL Hip () FR Leg () FL Leg () BR Hip () BL Hip () BR Leg () BL Leg () block and assign the pins for each hip and leg in the initialization.
  2. To make the quadruped perform a pre-defined action, you can drag and drop the do () action () times at () speed block and specify how many times to perform the action and at what speed.
  3. You can also drag and drop the wait () seconds block to make the quadruped wait for a specific number of seconds.
  4. To return the quadruped to its starting position, you can drag and drop the home position block.
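
For comparison, here is a minimal Python sketch of the same flow, reusing the quadruped calls from the Python examples later in this collection (the move name is an assumption standing in for a predefined action):

import time

sprite = Sprite('Tobi')
quarky = Quarky()
quad = Quadruped(4, 1, 8, 5, 3, 2, 7, 6)   # pin mapping as in step 1

quad.home()                     # starting position
quad.move("forward", 1000, 2)   # perform the action 2 times at the given speed
time.sleep(1)                   # wait block, as in step 3
quad.home()                     # return home, as in step 4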

Output

Read More
The project demonstrates how to make an IR sensor-activated coke dispenser.

Circuit

We are using 2 devices in this project:

  1. IR Sensor: The IR sensor provides information if there is an obstacle in front or not. The IR sensor connections are as follows:
    1. GND Pin connected to GND of the Quarky Expansion Board.
    2. VCC Pin connected to VCC of the Quarky Expansion Board.
    3. Signal Pin connected to D3 of the Quarky Expansion Board.
  2. The Water Pump Connected to the Relay: The water pump is turned on and off by a relay connected to a smart switch in the IoT house. When the relay is turned ON, the smart switch also turns ON, activating the water pump. The relay has the following connections:
    1. GND Pin connected to GND of the Quarky Expansion Board.
    2. VCC Pin connected to VCC of the Quarky Expansion Board.
    3. Signal Pin connected to Servo 8 of the Quarky Expansion Board.

Script

The script is simple: the relay turns ON when the IR sensor detects an obstacle, and OFF otherwise.
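
As a rough sketch of that logic in plain Python (read_ir and set_relay are hypothetical stand-ins for the actual IR-sensor and relay blocks, not real Quarky calls):

import time

def read_ir() -> bool:
    """Hypothetical stand-in: True when the IR sensor sees an obstacle."""
    return False

def set_relay(on: bool) -> None:
    """Hypothetical stand-in: drive the relay wired to the Servo 8 pin."""
    print("Relay", "ON" if on else "OFF")

while True:
    set_relay(read_ir())   # the pump runs only while a cup is detected
    time.sleep(0.1)        # small polling delay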

Output

Read More
What is IoT
Explore the Internet of Things (IoT) Lifecycle and learn about how it works in the real world. Understand the basics of IoT, the cloud, and Adafruit IO. Create an account with Adafruit IO, learn about Feeds and Dashboards, and visualize data.

What is IoT?

The Internet of Things is the network of physical objects or “things” embedded with sensors, actuators, and internet connectivity, which enables these objects to collect and exchange data.

A “Thing”, in the context of the IoT, is an entity or physical object that has a unique identifier (a unique sequence of characters used to identify or refer to it), combined with an embedded system and the ability to transfer data over the internet.

These devices collect useful data and then autonomously share it with other devices.

What is the IoT Lifecycle?

Now that we know, in brief, what IoT is, let’s understand how the IoT ecosystem works in the real world using what is known as the IoT lifecycle.

The IoT lifecycle comprises systems for:

  1. Collection: For any device or system to perform an action, it first needs data to act on. This data is generated by sensors or other IoT devices observing the thing; the sensors can be temperature sensors, motion sensors, moisture sensors, air quality sensors, light sensors, and so on.
  2. Communication: The data collected from the sensors is then sent to a destination over the Internet with security and reliability. Devices like routers and switches carry the data to the destination devices, which could be:
    1. A cloud platform like Google or Alexa
    2. Private data centers like Indian Defence data centers
    3. Home networks like smart-home networks.
  3. Analyzing: In the analysis phase, the collected data is processed into a meaningful form. This could mean:
    1. Visualizing the data, like the temperature variation during the day
    2. Building reports to analyze the cause of a manufacturing issue
    3. Setting up events, such as turning the air conditioner ON or OFF depending on the temperature.
  4. Acting: Now that we have the final form of the data, we need to act on it. The actions based on the information and data could be:
    1. Communicating with another machine, like turning the AC ON or OFF
    2. Sending a notification (SMS, e-mail, or text), like notifying that the plants have been watered.
    3. And much more.

Application of IoT

We use IoT in nearly all areas of life.

  1. Building and Home Automation: Smart thermostats that can be monitored and adjusted remotely to maintain an optimal temperature in the home.
  2. Manufacturing: Automated production lines that use sensors to detect and measure the quality of goods being produced.
  3. Medical and Healthcare Systems: Wearable medical devices that can monitor and transmit vital health data to medical personnel in real time.
  4. Environmental Monitoring: Sensors that measure air quality and temperature in various regions, and can alert authorities to potential hazards.
  5. Energy Management: Smart meters that measure energy usage in a home or business and allow for energy efficiency adjustments to be made remotely.
  6. Transportation: Connected cars that can inform drivers of traffic conditions, find the best routes, and even self-park.
  7. Better quality of life for the elderly: Wearable devices that track and monitor the physical activity and vital signs of elderly individuals, and can alert family members or medical personnel if a potential health emergency arises.

What is the Cloud?

In an IoT system, the most important component is the cloud service, on which we can store and retrieve data as the application requires. A cloud service is any service made available to users on demand through the internet. Cloud services are designed to provide easy, scalable access to applications, resources, and services. The cloud is a collection of data servers used to provide services like computing, analysis, and networking.

There are a number of cloud service providers out there, like Amazon, Microsoft, Salesforce, and Apple. One such cloud service is Adafruit IO, which we are going to use.

IoT with Adafruit IO

Adafruit.io is a cloud service – that just means it is run for you, and you don’t have to manage it. You can connect to it over the Internet. It’s meant primarily for storing and then retrieving data, but it can do a lot more than just that!

PictoBlox supports the IoT applications for Adafruit IO in this extension.

Create an Account in Adafruit IO

Follow the steps:

  1. Go to the website and Sign up: https://accounts.adafruit.com/users/sign_in
  2. Add the details and click on Create Account.
  3. You will be signed in to the account.

From here, you’ll want to learn about two important features of Adafruit IO before proceeding further: Feeds and Dashboards.

Feeds

Feeds are the core of the Adafruit IO system. The feed holds metadata about the data you push to Adafruit IO. This includes settings for whether the data is public or private, what license the stored sensor data falls under, and a general description of the data. The feed also contains the sensor data values that get pushed to Adafruit IO from your device.

You will need to create one feed for each unique source of data you send to the system. For example, if you have a project with one temperature sensor and two humidity sensors, you would need to create three feeds. One feed for the temperature sensor, and one feed for each humidity sensor.
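
For example, with the official Adafruit IO Python client (pip install adafruit-io), the three feeds above could be fed like this; the username, key, and feed names are placeholders:

from Adafruit_IO import Client

aio = Client("your_username", "your_aio_key")   # placeholders

# One feed per data source: one temperature sensor, two humidity sensors
aio.send("temperature", 24.7)
aio.send("humidity-1", 55)
aio.send("humidity-2", 58)

latest = aio.receive("temperature")   # read back the most recent value
print(latest.value)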

Creating a Feed

Follow the steps:

  1. Go to the Feeds tab. Click on New Feed.
  2. Add the name and description and click on Create.
  3. You will find the feed in the list.
  4. You can click on the feed name and visualize the data.
Read More

In this activity, we will make a computer program that controls the Mars Rover. It’s like a remote-control car: you can press different keys on the keyboard to make the Mars Rover move forward, move backward, turn left, and turn right.

Motor and Servo Motor

In our Mars Rover, there are a total of 6 motors and 5 servo motors.

The motors drive the wheels, which lets the rover move both forward and backward. All the left-side motors (3 motors) are connected to the left motor port of Quarky, and all the right-side motors (3 motors) are connected to the right motor port of Quarky using a 3-port wire. This means that to control the Mars Rover we only have to control 2 motors: Left and Right.

Also, there are 2 parameters to control: direction (forward or backward) and speed. With this control, the Mars Rover can perform all the desired motions.

The servo motors rotate the complete wheel assemblies so that the rover can change its wheel alignment and its path. They play a major role when the Mars Rover turns.

We will need to turn the servo motors to the Inside Servo Position to make Mars Rover turn left and right.

Python Code

  1. We will make sure we have initialized Quarky and the Mars Rover correctly.
  2. We will use a while loop to continuously detect keyboard keys and react accordingly.
  3. We will use the sprite’s Python function iskeypressed() to know which key has been pressed and act on it.
  4. When the up arrow key is pressed, we will set all the servos to a 90-degree angle with the help of the rover.home() function and run the Mars Rover with the quarky.runtimedrobot() function.
  5. With the same set of functions, we will create the structure for when the user presses the down arrow key, to create the respective motion.
  6. When the user presses the right or left arrow key, we will set the servo angles to a specific degree (40) to complete the turn successfully.
  7. In this way, we can turn the rover according to our needs using the runtimedrobot() function.
sprite = Sprite('Tobi')
import time
quarky = Quarky()
rover = MarsRover(4, 1, 7, 2, 6)

while True:
  if sprite.iskeypressed("up arrow"):
    rover.home()                        # all servos back to 90 degrees
    rover.setinangle(0)                 # align the wheels straight
    quarky.runtimedrobot("F", 100, 3)   # drive forward

  if sprite.iskeypressed("down arrow"):
    rover.home()
    rover.setinangle(0)
    quarky.runtimedrobot("B", 100, 3)   # drive backward

  if sprite.iskeypressed("right arrow"):
    rover.home()
    rover.setinangle(40)                # angle the wheels for the turn
    quarky.runtimedrobot("R", 100, 3)   # turn right

  if sprite.iskeypressed("left arrow"):
    rover.home()
    rover.setinangle(40)
    quarky.runtimedrobot("L", 100, 3)   # turn left

Output

Read More

This project of obstacle avoidance is for a robot that will move around and look for obstacles. It uses an ultrasonic sensor to measure the distance. If the distance is less than 20 cm, it will stop and look in both directions to see if it can move forward. If it can, it will turn left or right. If not, it will make a U-turn.

Logic

  1. This code makes the robot move around and explore its surroundings. It has an ultrasonic sensor that can measure the distance to objects.
  2. We will first initialize the servos of the Mars Rover with the “Set head pins ()” block.
  3. Then we will rotate all the servos to 90 degrees if they are not already initialized.
  4. Thereafter we will initialize the ultrasonic sensor and define the minimum and maximum distance variables.
  5. The main logic of the code first checks whether the distance is less than the minimum distance. If it is, the head servo moves to 45 degrees and checks whether the distance there is greater than the maximum distance, in which case the robot moves in that direction.
  6. With the help of the head servo, the robot checks the distance at 90, 45, 135, 0, and 180 degrees, in that order.
  7. Whenever the measured distance is less than the minimum distance, the head servo moves to the next angle to check the distance.
  8. If obstacles are detected at every angle, the robot reverses its direction by rotating 180 degrees. In this way, the robot can navigate its way around every obstacle.

Code:

Main Initialization:

Main Logic:

Final Condition Check:
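
Since the scripts above are shown as block images, here is a rough Python sketch of the same scanning logic; every function here is a hypothetical stand-in for the corresponding Mars Rover block, with a fake ultrasonic reading so the sketch runs on its own:

import random

# Hypothetical stand-ins for the actual Mars Rover blocks
def set_head_angle(angle): pass
def read_distance(): return random.uniform(5, 60)   # fake ultrasonic, in cm
def turn_towards(angle): print("turning towards", angle)
def drive_forward(): print("forward")
def u_turn(): print("U-turn")

MIN_DIST = 20   # cm, stop-and-scan threshold from the script
MAX_DIST = 40   # cm, assumed clear-path threshold

def find_clear_angle():
    """Scan the head angles in the order described above; return a clear one."""
    for angle in (90, 45, 135, 0, 180):
        set_head_angle(angle)
        if read_distance() > MAX_DIST:
            return angle
    return None                      # blocked in every direction

while True:
    if read_distance() < MIN_DIST:
        angle = find_clear_angle()
        if angle is None:
            u_turn()                 # reverse by rotating 180 degrees
        else:
            turn_towards(angle)
    else:
        drive_forward()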

Read More

Introduction

Dance motion with humanoid refers to using a robot that has a human-like appearance to perform dance movements. These robots are pre-programmed with various dance sequences and can also be customized to create unique dance routines.

To make the robot move, we need to use code to control its motors and servos. The code can be created using a programming tool or language such as PictoBlox, Python, or Arduino. The code tells the robot which movements to make, such as lifting its arms, bending its knees, or spinning around.

Different actions can be used to create different dance moves, and the dance can be accompanied by music or sound effects. The robot can also be programmed to display different colors or patterns on its body as it moves.

Humanoid robotics is a fun and creative way to explore the intersection of technology and the arts.

Code

Logic

  1. Here, we use the pre-defined dance actions and sequences of the humanoid in our code.
  2. To begin, we first initialize the humanoid extension and set up all the required pins by dragging and dropping the necessary blocks.
  3. We use a forever loop to continuously play the dance sequence along with different sounds and display matrices.
  4. To make the dance sequence more interesting, we use different actions with the do () action () times at () speed block. It’s quite fascinating.
  5. You can even try out your own dance moves by using different actions and adding your own creativity.
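
The block script is shown as an image; as a rough Python equivalent, reusing the humanoid calls that appear in the face-expression example later in this collection (the first action name is an assumption):

quarky = Quarky()
humanoid = Humanoid(7, 2, 6, 3, 8, 1)   # pin mapping from the other examples

while True:                             # forever loop
  humanoid.action("dance1", 1000, 1)    # assumed action name
  humanoid.action("dance2", 1000, 1)    # action used in a later example
  humanoid.home()                       # return to the start position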

Output

Read More
Learn how to code logic for speech-recognized control of Mars Rover with this example block code. You will be able to direct your own Mars Rover easily by just speaking commands.

Introduction

A speech-recognition-controlled Mars Rover is a robot that can recognize and interpret verbal commands given by a human. The code uses a speech recognition model that records and analyzes your speech and makes the Mars Rover react accordingly.

Speech recognition robots can be used in manufacturing and other industrial settings to control machinery, perform quality control checks, and monitor equipment.

They are also used to help patients with disabilities to communicate with their caregivers, or to provide medication reminders and other health-related information.

Code

sprite=Sprite('Tobi')
import time
rover = MarsRover(4, 1, 7, 2, 6)
quarky = Quarky()
sr = SpeechRecognition()
ts = TexttoSpeech()
sr.analysespeech(4, "en-US")
command = sr.speechresult()
command = command.lower()
if 'forward' in command:
  rover.home()
  rover.setinangle(0)
  quarky.runtimedrobot("F",100,3)
elif 'back' in command:
  rover.home()
  rover.setinangle(0)
  quarky.runtimedrobot("B",100,3)
elif 'right' in command:
  rover.home()
  rover.setinangle(40)
  quarky.runtimedrobot("R",100,3)
elif 'left' in command:
  rover.home()
  rover.setinangle(40)
  quarky.runtimedrobot("L",100,3)

time.sleep(10)
sprite.stopallsounds()

Logic

  1. Firstly, the code initializes the Mars Rover pins and starts recording from the device’s microphone to store the user’s spoken command.
  2. The code then checks whether the command includes a keyword such as “forward”. You can use customized commands and test for different conditions on your own.
  3. If the first condition is false, the code checks for the other keywords in the command.
  4. When a condition is true, the robot aligns itself accordingly and moves in the direction of the respective command.

Output

Forward-Backward Motions:

Right-Left Motions:

Read More
Learn about AI-based face expression detection, a technology that uses artificial intelligence algorithms and computer vision techniques to analyze images or videos of human faces and recognize emotions or expressions.

Introduction

AI-based face expression detection refers to the use of artificial intelligence algorithms and computer vision techniques to analyze images or videos of human faces and recognize the emotions or expressions being displayed. The technology can detect and analyze subtle changes in facial features, such as eye movement, mouth shape, and eyebrow position, to determine whether a person is happy, sad, angry, surprised, or expressing other emotions.

Discover the various fields that utilize this technology, including psychology, marketing, and human-computer interaction. Additionally, read about the logic and code behind face detection with a camera feed, including the initialization of parameters, face detection library, loop execution, and if-else conditions. Explore how the technology continuously analyzes emotions, and how the humanoid responds with different facial expressions and movements.

Code

sprite = Sprite('Tobi')

fd = FaceDetection()

quarky = Quarky()

import time

humanoid = Humanoid(7, 2, 6, 3, 8, 1)

# Turn the video ON with 0% transparency
fd.video("ON", 0)
fd.enablebox()

# Run this script forever
while 1:
  fd.analysecamera()          # Analyse image from camera 
  sprite.say(fd.expression()) # Say the face expressions
  
  if fd.isexpression(1, "happy"): # if face expression is happy
    quarky.showemotion("happy")   # show happy emotion on Quarky
    humanoid.action("dance2", 1000, 1)
    
  if fd.isexpression(1, 'sad'):
    quarky.showemotion("crying")
    humanoid.action("updown", 1000, 1)
    
  if fd.isexpression(1, 'surprise'):
    quarky.showemotion('surprise')
    humanoid.action("moonwalker", 1000, 1)
    
  if fd.isexpression(1, 'angry'):
    quarky.showemotion('angry')    
    humanoid.action("flapping2", 1000, 1)
  else:
    humanoid.home()
    
# Comment the above script, uncomment the below script and 
# run this script to clear the stage and quarky display

fd.disablebox()
fd.video("off")    
quarky.cleardisplay()

Logic

The example demonstrates how to use face detection with a camera feed. Following are the key steps happening:

  1. The code is using face detection to recognize facial expressions and control a humanoid and a display device called Quarky accordingly.
  2. Then, the program turns on the video with 0% transparency and enables the bounding box for face detection.
  3. The code then enters an infinite loop where it continuously analyzes the image from the camera using face detection and says the detected facial expressions.
  4. The code then checks whether the expression is happy, sad, surprised, or angry using the if-elif chain. If the expression is happy, the Quarky device displays a happy emotion and the humanoid performs the “dance2” action for a specific time. Similarly, for sad, surprised, and angry expressions, Quarky displays the respective emotion and the humanoid performs the associated action.
  5. If none of the target expressions is detected, the humanoid is set to its home position. Finally, to stop the program, comment out the main script and run the cleanup lines at the bottom to clear the stage and the Quarky display.

Output

Read More
Learn how to control the Mecanum using PictoBlox with keyboard inputs. Make the Mecanum move forward, backward, turn left, and turn right along with unique lateral motions!

In this activity, we will make a computer program that controls the Mecanum robot. It’s like a remote-control car: you can press different keys on the keyboard to make the Mecanum move forward, backward, left, and right.

The Quarky Mecanum Wheel Robot uses a special type of wheel to move. Each wheel is made of rollers mounted at 45-degree angles to the wheel’s hub; each of the four wheels is driven by its own motor and can spin in either direction. This allows the robot to move in any direction, making it an ideal choice for navigating around obstacles and tight spaces. The Mecanum wheel robot can also turn on the spot, allowing it to make sharp turns without having to reverse direction.

 

Coding Steps

Follow the steps:

  1. Open a new project in PictoBlox.
  2. Connect Quarky to PictoBlox.
  3. Click on the Add Extension button and add the Quarky Mecanum extension.
  4. Now we will first initialize the Mecanum robots and the servos before starting the main code.
  5. The main code will consist of nested if-else conditions that check which key is pressed and react accordingly. We will use the arrow keys for basic movements (Forward, Backward, Left, Right) and the keys “a” for lateral left movement and “d” for lateral right movement.

Code

Output

Forward-Backward Motion:

Lateral Right-Left Motion:

Circular Right-Left Motion:

Read More
Learn how to control the Mecanum using PictoBlox with keyboard inputs using Python. Make the Mecanum move forward, backward, turn left, and turn right along with unique lateral motions!

In this activity, we will make a computer program that controls the Mecanum robot. It’s like a remote-control car: you can press different keys on the keyboard to make the Mecanum move forward, backward, left, and right.

The Quarky Mecanum Wheel Robot uses a special type of wheel to move. Each wheel is made of rollers mounted at 45-degree angles to the wheel’s hub; each of the four wheels is driven by its own motor and can spin in either direction. This allows the robot to move in any direction, making it an ideal choice for navigating around obstacles and tight spaces. The Mecanum wheel robot can also turn on the spot, allowing it to make sharp turns without having to reverse direction.

Coding Steps

Follow the steps:

  1. Open a new project in PictoBlox.
  2. Connect Quarky to PictoBlox.
  3. Click on the Add Extension button and add the Quarky Mecanum extension.
  4. Now we will first initialize the Mecanum robots and the servos before starting the main code.
  5. The main code will consist of nested if-else conditions that will check which key is pressed and react accordingly. We will use the arrow keys for basic movements (Forward, Backward, Left, Right) and the keys “a” for lateral left movement and “d” for lateral right movement.

Code

sprite = Sprite('Tobi')
import time
quarky = Quarky()
robot = Mecanum(1, 2, 7, 8)

# Poll the keyboard and trigger the matching Mecanum motion
while True:
  if sprite.iskeypressed("up arrow"):
    robot.runtimedrobot("forward", 100, 2)

  if sprite.iskeypressed("down arrow"):
    robot.runtimedrobot("backward", 100, 1)

  if sprite.iskeypressed("right arrow"):
    robot.runtimedrobot("circular right", 70, 1)

  if sprite.iskeypressed("left arrow"):
    robot.runtimedrobot("circular left", 70, 1)

  if sprite.iskeypressed("a"):
    robot.runtimedrobot("lateral left", 100, 1)

  if sprite.iskeypressed("d"):
    robot.runtimedrobot("lateral right", 100, 1)

Output

Forward-Backward Motion:

Lateral Right-Left Motion:

Circular Right-Left Motion:

Read More
In this tutorial, you will learn how to control a quadruped robot using arrow keys.

Introduction

In this example, we will make a computer program that controls a “quadruped” (a four-legged robot). It’s like a remote-control car, except with four legs instead of four wheels. You can press different keys on the keyboard to make the quadruped move forward, move backward, turn left, and turn right.

Logic

The Quadruped will move according to the following logic:

  1. When the “UP” key is pressed, the Quadruped will move forward.
  2. When the “DOWN” key is pressed, the Quadruped will move backward.
  3. When the “LEFT” key is pressed, the Quadruped will turn left.
  4. When the “RIGHT” key is pressed, the Quadruped will turn right.

Code

The program uses the up, down, left, and right arrows to control the robot and make it move forward, backward, left, and right. Every time you press one of the arrows, the Quadruped moves in the chosen direction for a specific number of steps.

sprite = Sprite('Tobi')
quarky = Quarky()
import time

quad=Quadruped(4,1,8,5,3,2,7,6)
quad.home()

while True:
  if sprite.iskeypressed("up arrow"):
    quad.move("forward",1000,1)
    time.sleep(1)
    
  if sprite.iskeypressed("down arrow"):
    quad.move("backward",1000,1)
    
  if sprite.iskeypressed("left arrow"):
    quad.move("turn left",1000,1)
    
  if sprite.iskeypressed("right arrow"):
    quad.move("turn right",1000,1)

Output

Read More
Learn how to use the ML Environment to make a model that identifies hand gestures and makes the Quadruped move accordingly.

Introduction

This project demonstrates how to use the Machine Learning Environment to make a machine-learning model that identifies hand gestures and makes the quadruped move accordingly.

To test the model, simply enter the input values in the “Testing” panel and click on the “Predict” button.

The model will return the probability of the input belonging to the classes.

Export in Python Coding

Click on the “Export Model” button on the top right of the Testing box, and PictoBlox will load your model into the Python Coding Environment if you have opened the ML Environment in Python Coding.

Code

The following code appears in the Python Editor of the selected sprite.

####################imports####################
# Do not change

import numpy as np
import tensorflow as tf
import time
quarky = Quarky()

quad=Quadruped(4,1,8,5,3,2,7,6)

# Do not change
####################imports####################

#Following are the model and video capture configurations
# Do not change

model=tf.keras.models.load_model(
    "num_model.h5",
    custom_objects=None,
    compile=True,
    options=None)
pose = Posenet()                                                    # Initializing Posenet
pose.enablebox()                                                    # Enabling video capture box
pose.video("on",0)                                                  # Taking video input
class_list=['Forward','Backward','Left','Right','Stop']                  # List of all the classes

# Do not change
###############################################

#This is the while loop block, computations happen here
# Do not change

while True:
  pose.analysehand()                                             # Using Posenet to analyse hand pose
  coordinate_xy=[]
    
    # for loop to iterate through 21 points of recognition
  for i in range(21):
    if(pose.gethandposition(1,i,0)!="NULL"  or pose.gethandposition(2,i,0)!="NULL"):
      coordinate_xy.append(int(240+float(pose.gethandposition(1,i,0))))
      coordinate_xy.append(int(180-float(pose.gethandposition(2,i,0))))
    else:
      coordinate_xy.append(0)
      coordinate_xy.append(0)
            
  coordinate_xy_tensor = tf.expand_dims(coordinate_xy, 0)        # Expanding the dimension of the coordinate list
  predict=model.predict(coordinate_xy_tensor)                    # Making an initial prediction using the model
  predict_index=np.argmax(predict[0], axis=0)                    # Generating index out of the prediction
  predicted_class=class_list[predict_index]                      # Tallying the index with class list
  print(predicted_class)

    
  # Do not change

Logic

  1. If the identified class from the analyzed image is “forward,” the Quadruped will move forward at a specific speed.
  2. If the identified class is “backward,” the Quadruped will move backward.
  3. If the identified class is “left,” the Quadruped will move left.
  4. If the identified class is “right,” the Quadruped will move right.
  5. Otherwise, the Quadruped will be in the home position.

Add the following function to the code:

def runQuarky(predicted_class):
    if pose.ishanddetected():
      if predicted_class == "Forward":
        quad.move("forward",1000,1)
        
      elif predicted_class == "Backward":
        quad.move("backward",1000,1)
        
      elif predicted_class == "Left":
        quad.move("turn left",1000,1)
        
      elif predicted_class == "Right":
        quad.move("turn right",1000,1)
        
      elif predicted_class == "Stop":
        quad.home()

Final code

####################imports####################
# Do not change

import numpy as np
import tensorflow as tf
import time
quarky = Quarky()

quad=Quadruped(4,1,8,5,3,2,7,6)

# Do not change
####################imports####################

#Following are the model and video capture configurations
# Do not change

model=tf.keras.models.load_model(
    "num_model.h5",
    custom_objects=None,
    compile=True,
    options=None)
pose = Posenet()                                                    # Initializing Posenet
pose.enablebox()                                                    # Enabling video capture box
pose.video("on",0)                                                  # Taking video input
class_list=['Forward','Backward','Left','Right','Stop']                  # List of all the classes

def runQuarky(predicted_class):
    if pose.ishanddetected():
      if predicted_class == "Forward":
        quad.move("forward",1000,1)
        
      elif predicted_class == "Backward":
        quad.move("backward",1000,1)
        
      elif predicted_class == "Left":
        quad.move("turn left",1000,1)
        
      elif predicted_class == "Right":
        quad.move("turn right",1000,1)
        
      elif predicted_class == "Stop":
        quad.home()
# Do not change
###############################################

#This is the while loop block, computations happen here
# Do not change

while True:
  pose.analysehand()                                             # Using Posenet to analyse hand pose
  coordinate_xy=[]
    
    # for loop to iterate through 21 points of recognition
  for i in range(21):
    if(pose.gethandposition(1,i,0)!="NULL"  or pose.gethandposition(2,i,0)!="NULL"):
      coordinate_xy.append(int(240+float(pose.gethandposition(1,i,0))))
      coordinate_xy.append(int(180-float(pose.gethandposition(2,i,0))))
    else:
      coordinate_xy.append(0)
      coordinate_xy.append(0)
            
  coordinate_xy_tensor = tf.expand_dims(coordinate_xy, 0)        # Expanding the dimension of the coordinate list
  predict=model.predict(coordinate_xy_tensor)                    # Making an initial prediction using the model
  predict_index=np.argmax(predict[0], axis=0)                    # Generating index out of the prediction
  predicted_class=class_list[predict_index]                      # Tallying the index with class list
  print(predicted_class)
  runQuarky(predicted_class)
    
  # Do not change

Output

Read More
The Language Translator with ChatGPT is a powerful system that enables real-time translation and conversation support, facilitating multilingual communication.

Introduction

The Language Translator with ChatGPT and Speech Recognition is a system that helps people communicate across languages by providing real-time translation and conversation support. It combines language translation, chatbot capabilities, and speech recognition to facilitate multilingual communication.

Language Translator Using ChatGPT is a project that trains the ChatGPT language model with multilingual data to enable it to understand and translate text between different languages. It utilizes ChatGPT’s natural language processing abilities to generate human-like responses, making it ideal for building a language translation system. The training data includes sentence pairs in different languages and their corresponding translations.

Logic

Initially, two characters engage in a conversation. One character asks a question, and the other translates it into a different language and provides a response.

  1. Open PictoBlox and create a new file.
  2. Select the appropriate Block Coding environment.
  3. To add the ChatGPT extension, click on the extension button located as shown in the image. This enables the ChatGPT extension, allowing you to incorporate its capabilities into your project.
  4. To begin, select the two sprites, Hazel and John, from the sprite options in the bottom left corner, as shown in the image.
  5. To upload a backdrop, use the “Choose Backdrop” option, which allows you to select and set a background image or scene for your activity.
  6. To create a script, select a sprite and then add the block scripts that customize its behavior.
  7. Let’s use the sprites Hazel and John for our script.
  8. For Hazel, navigate to the costumes section and enable the “Flip Horizontal” option to add a mirror effect. Set the positions of both sprites as if they are talking to each other.
  9. Click on John’s sprite first. We will now begin writing the script shown in the image.
  10. First, we prompt the user to input a sentence, and using the say() method the sprite repeats the answer provided by the user.
  11. Next, using the broadcast () block, we send the answer to the second sprite so that both sprites work with the same response.
  12. Select Hazel’s sprite and begin with the when I receive () block. This block initiates the action when Hazel receives the message from the first sprite.
  13. Drag and drop the translate () into () block. This block translates the text into any language of your choice; in this example, we translate into Hindi.
  14. We use the get AI response block to obtain a response from ChatGPT. Then, using the say() method, Hazel delivers the answer as translated sentences.
  15. To begin the script, simply click on the green flag button.

Output

Read More
Learn about noun detectors, tools or algorithms designed to identify and extract nouns from text or speech inputs.

Introduction

A noun detector is a tool or algorithm designed to identify and extract nouns from a given text or speech input. Nouns are a type of word that typically represent people, places, things, or ideas. In the context of chat-based applications, a noun detector can be useful for extracting key information or identifying specific entities mentioned in a conversation. It can help in tasks such as named entity recognition, information retrieval, sentiment analysis, and many more.
A noun detector serves as a valuable component in language processing systems, helping to extract and utilize meaningful information from text or speech inputs in chat-based interactions.

Logic

First, ChatGPT generates random sentences, and we save this response in a variable. Then, it asks users to identify a noun from the given sentence. If the user’s answer matches the response generated by ChatGPT, it will say “Correct.” Otherwise, it will say “Incorrect answer.”

  1. Open PictoBlox and create a new file.
  2. Select the appropriate Block Coding environment.
  3. To add the ChatGPT extension, click on the extension button located as shown in the image. This enables the ChatGPT extension, allowing you to incorporate its capabilities into your project.
  4. We drag and drop the “Ask (AI)” block from the ChatGPT extension and use it to ask ChatGPT for a random sentence.
  5. We create a new variable called sentence and assign it the random sentence generated by ChatGPT.
  6. We use the say() method to give instructions for finding the noun in the given sentence.
  7. We drag and drop the get () from () block from the ChatGPT extension to extract the noun from the sentence.
  8. Using an if-else block, we prompt the user to identify a noun from the given sentence. If the user’s answer matches the response generated by ChatGPT, the sprite says “Correct answer” for 2 seconds.
  9. Otherwise, if the user’s answer does not match the response from ChatGPT, the sprite says “Answer is not a noun” for 2 seconds.
  10. To begin the script, simply click on the green flag button.

Code

Output

Read More
Discover an interactive way to get word definitions using ChatGPT and text-to-speech. Prompt users to choose a definition, generate it with ChatGPT, and have the sprite speak it out using the text-to-speech extension.

Logic

We ask the user which definition they want, and based on their input, ChatGPT generates the definition of the particular word. The sprite then uses the text-to-speech extension to speak out the definition.

Follow the steps below:

  1. Open PictoBlox and create a new file.
  2. Choose a suitable coding environment for block-based coding.
  3. Add the text-to-speech and ChatGPT extensions to your project from the extension palette located at the bottom right corner of PictoBlox.
  4. We use the Set Maximum Length to () block to specify the maximum length in ChatGPT, which represents the maximum number of words allowed in a single message.
  5. We prompt users to indicate which definition they would like to receive.
  6. We pass this input to the define () block, which sends a request to ChatGPT to define the user’s chosen word.
  7. We utilize the getAI block to retrieve the most recent response from ChatGPT. Then, we use the say() method to have the sprite speak out this response.
  8. Finally, we utilize the speak() block from the text-to-speech extension to have the given text spoken aloud.

Code

Output

Read More
Discover an interactive way to get word definitions using ChatGPT and text-to-speech. Prompt users to choose a definition, generate it with ChatGPT, and have the sprite speak it out using the text-to-speech extension.

Introduction

We ask the user which definition they want, and based on their input, ChatGPT generates the definition of the particular word. The sprite then uses the text-to-speech extension to speak out the definition.

Logic

The code represents a conversation between the sprite character “Tobi” and the AI models. The sprite asks the user for a definition, the user responds, the AI generates a result based on the response, and the sprite says and speaks the result.

Follow the steps below:

  1. Open PictoBlox and create a new file.
  2. Choose a suitable coding environment for Python-based coding.
  3. Add the text-to-speech and ChatGPT extensions to your project from the extension palette located at the bottom right corner of PictoBlox.
  4. Define a sprite, Tobi.
  5. Create an instance of the ChatGPT extension. ChatGPT is a language model that can generate human-like text responses based on the input it receives.
  6. Create an instance of the text-to-speech model using TexttoSpeech(). This text-to-speech synthesis system allows the generated text to be converted into spoken words.
  7. The sprite prompts the user to provide a specific definition.
  8. The user’s response to the sprite’s question is stored in the variable l after converting it to a string using the str() function.
  9. We then call the askdefination() method on the gpt object, passing the user’s input l as an argument. This method retrieves a definition based on the user’s input.
  10. Next, we call the chatGPTresult() method on the gpt object, which returns the generated response. The result is stored in the variable result.
  11. Now, we instruct the sprite Tobi to show the result by using the say() method.
  12. Finally, we call the speak() method on the speech object, passing the result as an argument. This synthesizes the text into speech and plays it back.
  13. Press the Run button to run the code.

Code

sprite = Sprite('Tobi')
gpt = ChatGPT()
speech = TexttoSpeech()

sprite.input("Which definition do you want?")
l = str(sprite.answer())

data = gpt.askdefination(l)

result = gpt.chatGPTresult()
sprite.say(result, 10)
speech.speak(result)

Output

Read More
[PythonExtension]