QR Code Scanner - Blocks, Python Functions, Projects | PictoBlox Extension

QR Code Scanner

Extension Description
Detect, identify and read QR codes from images.

Introduction

What is QR Code?

A QR code is a machine-scannable image that can be read instantly with a smartphone camera. Every QR code consists of a pattern of black squares and dots that represents an encoded piece of information, such as letters, numbers, or symbols. When your smartphone scans the code, it translates that machine-readable data into something that humans can easily understand.
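The encoding step can be loosely illustrated in Python. The snippet below only shows how text becomes the kind of raw bit pattern that a QR symbol's black and white modules store; real QR codes additionally apply error correction, masking, and format information on top of this:

```python
# Loose illustration only: convert text to a bit string, the kind of
# machine-level data a QR code's black/white modules represent.
# Real QR codes also add error correction and masking (ISO/IEC 18004).
def text_to_bits(text: str) -> str:
    return "".join(f"{byte:08b}" for byte in text.encode("utf-8"))

bits = text_to_bits("QR")
print(bits)  # 'Q' = 01010001, 'R' = 01010010
```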

 

The QR Code Scanner extension allows users to scan QR codes from the camera or stage and report the information:

  1. QR Code Data
  2. QR code position on the stage
  3. QR code angle alignment on the stage
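The position and angle values can be understood geometrically. As a minimal sketch (the corner coordinates and helper function below are illustrative, not part of the PictoBlox API), the centre of a detected QR code is the mean of its four corner points, and its rotation follows from the direction of its top edge:

```python
import math

def qr_position_and_angle(corners):
    """corners: four (x, y) points in order
    top-left, top-right, bottom-right, bottom-left."""
    cx = sum(x for x, _ in corners) / 4
    cy = sum(y for _, y in corners) / 4
    # Angle of the top edge (top-left -> top-right) in degrees.
    (x0, y0), (x1, y1) = corners[0], corners[1]
    angle = math.degrees(math.atan2(y1 - y0, x1 - x0))
    return (cx, cy), angle

# A QR code centred at (10, 20), not rotated:
centre, angle = qr_position_and_angle([(0, 30), (20, 30), (20, 10), (0, 10)])
print(centre, angle)  # (10.0, 20.0) 0.0
```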

Accessing QR Code Scanner in Block Coding

Following is the process to add QR Code Scanner capability to the PictoBlox Project.

  1. Open PictoBlox and create a new file.
  2. Select the coding environment as Block Coding.
  3. Next, click on the Add Extension button and add the QR Code Scanner extension.
  4. You can find the QR Code Scanner blocks available in the project.

Accessing QR Code Scanner in Python Coding

Following is the process to add QR Code Scanner capability to the PictoBlox Project.

  1. Open PictoBlox and create a new file.
  2. Select the coding environment as Python Coding.
  3. Next, click on the Add Modules/Libraries button and add the QR Code Scanner extension.
  4. To access the library functions, you have to add the object declaration.
    qr = QRCodeScanner()
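The extension's reporters map detections to stage coordinates. As a minimal sketch, assuming the standard PictoBlox/Scratch stage of 480 × 360 units with the origin at the centre (x from -240 to 240, y from -180 to 180), a top-left-origin pixel position in a camera frame can be converted like this (the helper function is illustrative, not part of the QRCodeScanner API):

```python
def pixel_to_stage(px, py, frame_w=480, frame_h=360):
    """Map a top-left-origin pixel position to centre-origin
    stage coordinates (y grows upward on the stage)."""
    stage_x = (px / frame_w) * 480 - 240
    stage_y = 180 - (py / frame_h) * 360
    return stage_x, stage_y

print(pixel_to_stage(240, 180))  # centre of the frame -> (0.0, 0.0)
print(pixel_to_stage(0, 0))      # top-left corner -> (-240.0, 180.0)
```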

Example Project Video

In this project, a QR code is generated with the help of a QR code generator website. Credits: https://www.qr-code-generator.com/

The QR code is linked to a URL. Then, the QR code is scanned with the help of the QR Code Scanner extension of PictoBlox. The project was created by RS Junction.

Read More

PictoBlox Blocks

The block performs the selected action for the quadruped. The action runs for the specified times and at the specified speed.
The block reports the current time.
The block performs the selected motion for the humanoid. The motion runs for the specified times and at the specified speed.
The block stops all the motors of the robot.
The block makes a request to ChatGPT to define the text specified in it. The response of ChatGPT is then stored in PictoBlox and can be accessed using the get AI response block.
This block sets the output of a selected PWM pin of an Arduino Uno, Arduino Mega, or Arduino Nano board to a value from 0 to 255. When set to 128, the output will be high for half the time, and low for the other half. This allows users to control the voltage output to an attached device.
This block writes a specific text, such as “Hello, World!”, onto an LCD display. It is useful for creating simple text-based user interfaces for electronic projects or devices.
This block resets all the servo motors of the robotic arm to their default angle which is commonly referred to as the ‘home’ position.
This block allows you to adjust the robot’s turning speed.
Starts the script whenever a message of a specific color is received.
Moves the sprite a specified number of grid squares down.
Increases the sprite’s size.
Stops all the sprites’ scripts.
This block enables setting the instrument for the upcoming musical note.
This clears all pen trails and stamps from the stage.
After connection is established, rotates the Quarky a specified number of steps to the right.
Shows a specified static emotion on the quarky LED display.
Detects and identifies the facial expression within a view captured by the camera.
Detects and identifies signs made by hand within a view captured by camera.
After connection is established, moves the Wizbot a specified number of steps back.
After connection is established, rotates the Wizbot by a specified angle to the left.
This block is used to write text on evive’s TFT display.
evive has two potentiometers whose analog outputs can be varied by turning the knob clockwise or anti-clockwise. This block returns the analog output of either potentiometer (from 0 to 1023).
There are 10 digital buttons in the gamepad module, whose data is sent to the device when they are pressed or released. The block reports whether the chosen button is currently pressed on the gamepad: it returns true if the button is pressed, else false.
The block sets the relay connected to the specified digital pin to ON or OFF.
This block is used to set the angles at which the gripper of the robotic arm opens and closes. You need to use this block every time you open or close the gripper, as it defines the angles at which the gripper claw is opened and closed.
This block should be included every time you work with the humanoid robot for the first time, as it calibrates the angles of all four servo motors of the arms (2 shoulder servos + 2 hand servos) and saves them in the memory of evive.
The block points its sprite towards the mouse-pointer or another sprite depending on its costume center; this changes the sprite’s direction and rotates the sprite.
The block changes its sprite’s costume to a specified one.

Block Coding Examples


Python Functions

The function enables the automatic display of the landmark on pose/hand detected on the stage.
Syntax: enablebox()
The function disables the automatic display of the landmark on pose/hand detected on the stage.
Syntax: disablebox()
This function is used to analyze the image received as input from the stage, for human pose detection.
Syntax: analysestage()
This function returns the x position of the pose landmark detected. The position is mapped with the stage coordinates.
Syntax: x(landmark_number = 1, pose_number = 1)
This function returns the y position of the pose landmark detected. The position is mapped with the stage coordinates.
Syntax: y(landmark_number = 1, pose_number = 1)
The function tells whether the human pose is detected or not.
Syntax: isdetected(landmark_number = 1, pose_number = 1)
This function is used to analyze the image received as input from the camera, for human hand detection.
Syntax: analysehand()
The function tells whether the human hand is detected or not.
Syntax: ishanddetected()
This function returns the specified parameter of the hand landmark detected.
Syntax: gethandposition(parameter = 1, landmark_number = 4)
This function returns the x position of the hand detected. The position is mapped with the stage coordinates.
Syntax: handx()
This function returns the y position of the hand detected. The position is mapped with the stage coordinates.
Syntax: handy()
The function adds the specified text data to the specified class.
Syntax: pushdata(text_data = "your text", class_label = "class")
The function trains the NLP model with the data added with pushdata() function.
Syntax: train()
The function resets and clears the NLP model.
Syntax: reset()
The function analyses the specified text and provides the class name under which it has been classified by the NLP model.
Syntax: analyse(text = "your text")
The function is used to control the state of the camera.
Syntax: video(video_state = "on", transparency = 1)
The function enables the automatic display of the box on object detection on the stage.
Syntax: enablebox()
The function disables the automatic display of the box on object detection on the stage.
Syntax: disablebox()
This function is used to set the threshold for the confidence (accuracy) of object detection, 0 being low confidence and 1 being high confidence.
Syntax: setthreshold(threshold = 0.5)
This function is used to analyze the image received as input from the camera, for objects.
Syntax: analysecamera()
This function is used to analyze the image received as input from the stage, for objects.
Syntax: analysestage()
This function returns the total number of objects detected in the camera feed or the stage.
Syntax: count()
This function is used to get the class name of the analyzed object.
Syntax: classname(object = 1)
This function returns the x position of the object detected. You can specify the object for which the value is needed. The position is mapped with the stage coordinates.
Syntax: x(object = 1)
This function returns the y position of the object detected. You can specify the object for which the value is needed. The position is mapped with the stage coordinates.
Syntax: y(object = 1)
This function returns the width of the object detected. You can specify the object for which the value is needed. The position is mapped with the stage coordinates.
Syntax: width(object = 1)
This function returns the height of the object detected. You can specify the object for which the value is needed. The position is mapped with the stage coordinates.
Syntax: height(object = 1)
This function is used to get the confidence (accuracy) of object detection, 0 being low confidence and 1 being high confidence.
Syntax: confidence(object = 1)
The function returns whether the specified signal is detected in the analysis or not.
Syntax: issignaldetected(signal_name = "Go")
The function returns the specified parameter for the specified signal detected.
Syntax: getsignaldetail(signal_name = "Go", parameter_value = 1)
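The confidence and threshold functions above follow a common detection pattern: analyse a frame, then keep only the detections whose confidence clears the threshold. Below is a self-contained sketch of that filtering logic on mock detection data (the data and helper function are illustrative only; inside PictoBlox you would use setthreshold(), count(), classname(), and confidence() instead):

```python
def filter_detections(detections, threshold=0.5):
    """Keep detections whose confidence (0..1) meets the threshold."""
    return [d for d in detections if d["confidence"] >= threshold]

# Mock detections, shaped like what the detection reporters describe:
detections = [
    {"class": "person", "confidence": 0.92},
    {"class": "dog",    "confidence": 0.31},
    {"class": "cup",    "confidence": 0.67},
]
kept = filter_detections(detections, threshold=0.5)
print([d["class"] for d in kept])  # ['person', 'cup']
```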

Python Coding Examples
