Set PID Line Following Parameter - PictoBlox Block | Quarky Advanced Line Following


Set PID Line Following Parameter

Description

Proportional, Integral, and Derivative (PID) control uses the feedback (analog inputs from the IR sensors) to continuously correct and smooth the robot’s movement.

1. Proportional (P):
This component enables the robot to make immediate, precise adjustments based on its distance from the line. If the robot drifts too far to the right, it will steer left, and if it veers too far to the left, it will steer right to correct its course.

2. Integral (I):
The integral component monitors how long the robot has been off the line. If the deviation persists for an extended time, this part applies a larger correction to bring the robot back on track more effectively.

3. Derivative (D):
The derivative component anticipates future errors by analyzing the rate at which the robot drifts from the line. This ensures smooth, predictive adjustments, helping to prevent overshooting and maintain stability.

The control output is given by:
Control Output = Proportional + Integral + Derivative

So, the PID controller combines these three components to continuously adjust the robot’s movements, keeping it as close to the line as possible while moving smoothly and quickly. The robot is constantly fine-tuning its path to stay on track.
Think of it as a smart system that balances and corrects itself as it moves, ensuring it follows the line accurately. PID line followers are commonly used in robotics competitions and educational settings to teach about control systems and automation.

Note: If you initialize three-IR line following, use Kp (proportional constant) >= 7 for better results.

How does PID work in the Quarky Line Following?
In Quarky’s “Do Line Following” Block/Python function, the system employs its two Infrared (IR) sensors, one on the left and another on the right, to navigate along a line.
Let’s say the analog values registered by these sensors on a white surface are:
Left = 150
Right = 170

while on the black line, they read:
Left = 820
Right = 750.

When the robot drifts to the right, the left sensor ends up on the black line and the right sensor on the white surface. At this point, the IR sensor readings are:
Left = 820
Right = 170
The error is calculated as follows:

Error = (Left Sensor Value – Right Sensor Value)/10
= (820 – 170)/10
= 65
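
As a quick check, here is the same error computation in plain Python, using the example readings above:

left_sensor = 820   # left sensor on the black line
right_sensor = 170  # right sensor on the white surface

# Error as defined above: scaled difference of the two readings
error = (left_sensor - right_sensor) / 10
print(error)  # 65.0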

Proportional Only
For the PID (Proportional-Integral-Derivative) controller, the constants are set as follows:
Kp = 0.5, Ki = 0, and Kd = 0.

The proportional term is then calculated:
Proportional = Kp * Error
= 0.5 * 65
= 32.5 ≈ 32

The control output is given by:
Control Output = Proportional + Integral + Derivative

In this case, both the integral and derivative terms are set to 0, so the control output simplifies to:
Control Output = 32 + 0 + 0
= 32

Subsequently, the motor speeds are adjusted based on the control output and the motor speed parameters.

Assuming base speed = 40, minimum speed = 0, and maximum speed = 80
The Left and Right motor speeds are computed as follows:

Left Motor Speed = Base Speed – Control Output
= 40 – 32
= 8

Right Motor Speed = Base Speed + Control Output
= 40 + 32
= 72
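
The same proportional-only step can be sketched in plain Python, with the motor speeds clamped to the minimum and maximum speed parameters given above:

Kp = 0.5
error = 65
base_speed, min_speed, max_speed = 40, 0, 80

# Proportional-only control output (rounded to 32 in the worked example)
control_output = Kp * error

def clamp(speed):
  # Keep the speed inside the [min_speed, max_speed] range
  return max(min_speed, min(max_speed, speed))

left_motor = clamp(base_speed - control_output)
right_motor = clamp(base_speed + control_output)
print(left_motor, right_motor)  # 7.5 72.5 (8 and 72 with the rounded output)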

Proportional and Integral Only
The integral term in the PID (Proportional-Integral-Derivative) controller addresses the accumulated past errors over time. If there has been a persistent error over time, the integral term gradually increases, helping to eliminate the accumulated error.

Consider a scenario where the robot is stuck and its tires are slipping due to the robot’s excessive weight and a low battery, with the sensors on contrasting surfaces: the left sensor on black and the right sensor on white. The proportional constant (Kp) alone adjusts the motor speeds as before, setting the left motor speed to 8 and the right motor speed to 72, but that is not enough to free the robot.

When an integral term with a constant value (Ki = 0.01) is introduced into the control equation, the integral value is updated in each iteration:

I = I + error
I = 0 + 65 = 65

The control output, comprising the proportional, integral, and derivative terms, is then computed (with the proportional term rounded to 32 as above):
Control Output = Proportional + Integral + Derivative
= (0.5 * 65) + (0.01 * 65) + 0
= 32 + 0.65 + 0
= 32.65
Left Motor Speed = 40 – 32.65 = 7.35
Right Motor Speed = 40 + 32.65 = 72.65

In the subsequent loop, the integral term is updated again:
I = I + error
I = 65 + 65 = 130

And the control output in the next iteration becomes:
Control Output = (0.5 * 65) + (0.01 * 130) + 0
= 32 + 1.3 + 0
= 33.3
Left Motor Speed = 40 – 33.3 = 6.7
Right Motor Speed = 40 + 33.3 = 73.3
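
The integral accumulation over these two iterations, sketched in plain Python:

Kp, Ki = 0.5, 0.01
integral = 0

for error in (65, 65):  # the same error persists across two loops
  integral += error  # I = I + error
  control_output = Kp * error + Ki * integral
  print(control_output)  # 33.15, then 33.8 (32.65 and 33.3 with P rounded to 32)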

Proportional, Integral, and Derivative
Derivative (D): The derivative term in the PID (Proportional-Integral-Derivative) controller anticipates future errors by assessing how fast the error is changing. It plays a crucial role in preventing overshooting or oscillations by slowing down the control action as the system approaches the setpoint.

In a small arena, when a robot needs to execute a significant turn or a U-turn, increasing the proportional constant (Kp) may result in excessive oscillations. To address this, the derivative term is introduced to control oscillations and overshooting.

Consider a scenario where the error has increased:
Error = (Left Sensor Value – Right Sensor Value)/10
= (930 – 170)/10
= 76

The derivative term is computed as:
D = Previous Error – Error
= 65 – 76
= -11

With the constants Kp = 0.5, Ki = 0.01, and Kd = 0.2 (and the integral accumulator restarted for this example, so I = 0 + 76 = 76), the control output, including the proportional, integral, and derivative terms, is:

Control Output = Proportional + Integral + Derivative
= (0.5 * 76) + (0.01 * 76) + (0.2 * -11)
= 38 + 0.76 – 2.2
= 36.56

Subsequently, the left and right motor speeds are adjusted:
Left Motor Speed = 40 – 36.56 = 3.44
Right Motor Speed = 40 + 36.56 = 76.56

In the subsequent loop, the integral term is updated, and the process repeats with a new error:
Error = (600 – 170)/10
= 43

The integral term is updated:
I = I + Error
= 76 + 43
= 109

The derivative term for the new error is computed:
D = Previous Error – Error
= 76 – 43
= 33

The control output is computed with the same equation in this loop:
Control Output = (0.5 * 43) + (0.01 * 109) + (0.2 * 33)
= 21.5 + 1.09 + 6.6
= 29.19

Left Motor Speed = 10.81
Right Motor Speed = 69.19

This iterative process continues, with the derivative term helping to manage the robot’s response to changing errors, ultimately enhancing its stability.
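
Putting the three terms together, here is a minimal sketch of the full loop in plain Python. The read_left_ir(), read_right_ir(), and set_motor_speeds() helpers are hypothetical stand-ins for your robot’s sensor and motor calls; Quarky’s built-in line-following block performs the equivalent internally.

import time

Kp, Ki, Kd = 0.5, 0.01, 0.2
base_speed, min_speed, max_speed = 40, 0, 80

# Hypothetical hardware stubs - replace with your robot's actual calls
def read_left_ir():
  return 820

def read_right_ir():
  return 170

def set_motor_speeds(left, right):
  print(left, right)

def clamp(speed):
  # Keep the speed inside the [min_speed, max_speed] range
  return max(min_speed, min(max_speed, speed))

integral = 0
previous_error = 0

while True:
  error = (read_left_ir() - read_right_ir()) / 10

  integral += error  # I = I + error
  derivative = previous_error - error  # D = Previous Error - Error
  previous_error = error

  control_output = Kp * error + Ki * integral + Kd * derivative

  set_motor_speeds(clamp(base_speed - control_output),
                   clamp(base_speed + control_output))
  time.sleep(0.05)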

 

Example

The project counts the number of faces detected on the stage.

Code

sprite = Sprite('Tobi')

fd = FaceDetection()

# Enable Bounding Box on the stage
fd.enablebox()

# Set Threshold of the stage
fd.setthreshold(0.9)

fd.analysestage()

sprite.say(str(fd.count()) + " Faces Detected")

Output

Read More
The example demonstrates how face recognition works with analysis on the stage.

The example demonstrates the application of face recognition on the stage. The following are the key steps:

  1. Initializing the program with parameters for the sprite and face detection library.
  2. Saving Chris’s face as class 1.
  3. Saving Robert’s face as class 2.
  4. Running face recognition and placing the square box sprite on the faces of Chris and Robert.

Code

sprite = Sprite('Square Box')
fd = FaceDetection()
import time

fd.setthreshold(0.5)
fd.enablebox()

# Reset Database
fd.deleteallclass()

# Adding Chris's face to database
sprite.switchbackdrop("Chris")
time.sleep(0.5)
fd.addclassfromstage(1, "Chris")

# Adding Robert's face to database
sprite.switchbackdrop("Robert")
time.sleep(0.5)
fd.addclassfromstage(2, "Robert")

sprite.switchbackdrop("Robert and Chris")

while True:
  fd.recognisefromstage()
  
  print(fd.count())
  for i in range(fd.count()):
    sprite.setx(fd.x(i+1))
    sprite.sety(fd.y(i+1))
    sprite.setsize(fd.width(i+1))
    sprite.say(fd.getclassname(i+1))
    time.sleep(1)

Result

Read More
The example demonstrates how face recognition works with analysis on the camera.

The example demonstrates the application of face recognition with a camera feed. The following are the key steps:

  1. Initializing the program with parameters for the sprite and face detection library.
  2. Saving the face showing in the camera as class 1.
  3. Running face recognition and reporting whether class 1 is detected or not.

Code

sprite = Sprite('Tobi')

fd = FaceDetection()
import time

fd.setthreshold(0.5)
fd.video("on", 0)
fd.enablebox()
time.sleep(2)

fd.deleteallclass()

# Adding face 1 to database
fd.addclassfromstage(1, "Face 1")

while True:
  fd.recognisefromcamera()
  
  if fd.isclassdetected(1):
    sprite.say("Face 1 Recognised")
  else:
    sprite.say("Face 1 Missing")

Output

Read More
The example demonstrates the application of face detection with a stage feed.

The example demonstrates the application of face detection with a stage feed. The following are the key steps:

  1. Initializing the program with parameters for the sprite and face detection library.
  2. Running face detection.
  3. Running the loop to show every face and its expression.

Code

sprite = Sprite('Square Box')
import time
fd = FaceDetection()

# Disable Bounding Box on the stage
fd.disablebox()

# Set Threshold of the stage
fd.setthreshold(0.4)

fd.analysestage()

print(fd.count())

for i in range(fd.count()):
  sprite.setx(fd.x(i + 1))
  sprite.sety(fd.y(i + 1))
  sprite.setsize(fd.width(i + 1))
  sprite.say("Face " + str(i + 1) + ": " + fd.expression(i + 1))
  time.sleep(1)

Output

 

Read More
The example demonstrates how to use face landmarks in projects.

The example demonstrates how to use face landmarks in projects. The following are the key steps:

  1. Initializing the program with parameters for the sprite, pen, and face detection library.
  2. Running face detection.
  3. Running the loop to show every landmark on the face.

Code

sprite = Sprite('Ball')
fd = FaceDetection()
import time
pen = Pen()

pen.clear()
sprite.setsize(10)

fd.enablebox()

fd.analysestage()

for i in range(68):
  sprite.setx(fd.landmarksx(1, i+1))
  sprite.sety(fd.landmarksy(1, i+1))
  pen.stamp()
  time.sleep(0.2)

Output

 

Read More
The example demonstrates how to use face detection with a camera feed.

The example demonstrates how to use face detection with a camera feed. The following are the key steps:

  1. Initializing the program with parameters for the sprite and the face detection library.
  2. Running face detection.
  3. Running the loop to show every face and its expression.

Code

sprite = Sprite('Square Box')
import time
fd = FaceDetection()

fd.video("on", 0)

# Enable Bounding Box on the stage
fd.enablebox()

# Set Threshold of the stage
fd.setthreshold(0.5)

while True:
  fd.analysestage()

  for i in range(fd.count()):
    sprite.setx(fd.x(i + 1))
    sprite.sety(fd.y(i + 1))
    sprite.setsize(fd.width(i + 1))
    sprite.say(fd.expression(i + 1))

Output

Read More
Beating-Heart (4)
The project shows how to create custom patterns on Quarky RGB LED in Stage Mode.


Code

sprite = Sprite('Tobi')
quarky = Quarky()

import time

while True:
	quarky.drawpattern("jjbjbjjjbbbbbjjbbbbbjjjbbbjjjjjbjjj")
	time.sleep(0.4)
	quarky.drawpattern("jjjjjjjjjbjbjjjjbbbjjjjjbjjjjjjjjjj")
	time.sleep(0.4)
Read More
The project shows how to create custom patterns on Quarky RGB LED in Upload Mode.


Code

from quarky import *
import time

while True:
	quarky.drawpattern("jjbjbjjjbbbbbjjbbbbbjjjbbbjjjjjbjjj")
	time.sleep(1)
	quarky.drawpattern("jjjjjjjjjbjbjjjjbbbjjjjjbjjjjjjjjjj")
	time.sleep(1)
Read More
The project shows how to create custom patterns on Quarky RGB LED in Upload Mode.


Script

Python Code Generated

# This python code is generated by PictoBlox

from quarky import *

# imported modules
import time

while True:
	quarky.drawpattern("jjbjbjjjbbbbbjjbbbbbjjjbbbjjjjjbjjj")
	time.sleep(1)
	quarky.drawpattern("jjjjjjjjjbjbjjjjbbbjjjjjbjjjjjjjjjj")
	time.sleep(1)
Read More
Beating-Heart
The project shows how to create custom patterns on Quarky RGB LED in Upload Mode.


Script

Read More
The project makes the Quarky display the expression according to the expression identified from the Face Recognition.

Script
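
As a rough Python equivalent of the script, here is a minimal sketch that reuses the FaceDetection and Quarky APIs shown elsewhere on this page; the exact expression strings and their mapping to LED patterns are assumptions to adapt:

sprite = Sprite('Tobi')
quarky = Quarky()
fd = FaceDetection()
import time

fd.video("on", 0)
fd.enablebox()
time.sleep(2)

# Heart patterns reused from the Beating-Heart examples above
big_heart = "jjbjbjjjbbbbbjjbbbbbjjjbbbjjjjjbjjj"
small_heart = "jjjjjjjjjbjbjjjjbbbjjjjjbjjjjjjjjjj"

while True:
  fd.analysestage()
  if fd.count() > 0:
    # The expression string values (e.g. "happy") are an assumption;
    # check what fd.expression() reports in your build
    if fd.expression(1) == "happy":
      quarky.drawpattern(big_heart)
    else:
      quarky.drawpattern(small_heart)
  time.sleep(0.5)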

Read More
The example shows how to run image classification in Python on a webcam feed using OpenCV.

Image Classification Model

Code

####################imports####################
#do not change

import cv2
import numpy as np
import tensorflow as tf

sprite = Sprite("Tobi")

#do not change
####################imports####################

#Following are the model and video capture configurations
#do not change

model = tf.keras.models.load_model('saved_model.h5',
                                   custom_objects=None,
                                   compile=True,
                                   options=None)

cap = cv2.VideoCapture(0)  # Using device's camera to capture video
text_color = (206, 235, 135)
org = (50, 50)
font = cv2.FONT_HERSHEY_SIMPLEX
fontScale = 1
thickness = 3

class_list = ['Mask Off', 'Mask On', 'Mask Wrong']  # List of all the classes

#do not change
###############################################


def checkmask(predicted_class):
  if predicted_class == 'Mask On':
    sprite.say("Thank you for wearing the mask")
  elif predicted_class == 'Mask Off':
    sprite.say("Please wear a mask")
  else:
    sprite.say("Please wear the mask propertly")


#This is the while loop block, computations happen here

while True:
  ret, image_np = cap.read()  # Reading the captured images
  image_np = cv2.flip(image_np, 1)
  image_resized = cv2.resize(image_np, (224, 224))
  img_array = tf.expand_dims(image_resized,
                             0)  # Expanding the image array dimensions
  predict = model.predict(img_array)  # Making an initial model prediction
  predict_index = np.argmax(predict[0],
                            axis=0)  # Generating index out of the prediction
  predicted_class = class_list[
      predict_index]  # Tallying the index with class list

  image_np = cv2.putText(
      image_np, "Image Classification Output: " + str(predicted_class), org,
      font, fontScale, text_color, thickness, cv2.LINE_AA)

  print(predict)
  cv2.imshow("Image Classification Window",
             image_np)  # Displaying the classification window
  checkmask(predicted_class)

  if cv2.waitKey(25) & 0xFF == ord(
      'q'):  # Press 'q' to close the classification window
    break

cap.release()  # Stops taking video input
cv2.destroyAllWindows()  #Closes input window
Read More
The example shows how to run image classification in Python on an image file using OpenCV.

Image Classification Model

 

Code

####################imports####################
#do not change

import cv2
import numpy as np
import tensorflow as tf

#do not change
####################imports####################

#Following are the model and video capture configurations
#do not change

model = tf.keras.models.load_model('saved_model.h5',
                                   custom_objects=None,
                                   compile=True,
                                   options=None)

text_color = (206, 235, 135)
org = (50, 50)
font = cv2.FONT_HERSHEY_SIMPLEX
fontScale = 0.5
thickness = 1

class_list = ['Bacteria', 'Normal', 'Virus']  # List of all the classes

#do not change
###############################################

image_np = cv2.imread("test.jpg", cv2.IMREAD_COLOR)
image_resized = cv2.resize(image_np, (224, 224))
img_array = tf.expand_dims(image_resized,
                           0)  # Expanding the image array dimensions
predict = model.predict(img_array)  # Making an initial model prediction
predict_index = np.argmax(predict[0],
                          axis=0)  # Generating index out of the prediction
predicted_class = class_list[
    predict_index]  # Tallying the index with class list

image_np = cv2.putText(image_np,
                       "Image Classification Output: " + str(predicted_class),
                       org, font, fontScale, text_color, thickness,
                       cv2.LINE_AA)

print(predict)
cv2.imshow("Image Classification Window",
           image_np)  # Displaying the classification window

cv2.imwrite("TestResult.jpg", image_np)
cv2.waitKey(0)
cv2.destroyAllWindows()
Read More
The example shows how to run image classification in Block Coding.

Script

Read More
The example demonstrates how to use the confidence threshold in face detection (Block Coding).

Script

Output

Read More
The example shows how to detect an expression using face detection and mimic it on Quarky. The expression is detected by the camera.

Script

Output

Read More
The example shows how to create a face filter with Face Detection. It also includes how to make the filter tilt with face angles.

Script

Example

Read More
Face Landmarks
The example demonstrates how to use face landmarks in projects.

The example demonstrates how to use face landmarks in projects. The following are the key steps:

  1. Initializing the program with parameters for the sprite, pen, and face detection library.
  2. Running face detection.
  3. Running the loop to show every landmark on the face.

Script

Output


Read More
The example demonstrates how face recognition works with analysis on the camera.

The example demonstrates the application of face recognition with a camera feed. The following are the key steps:

  1. Initializing the program with parameters for the sprite and face detection library.
  2. Saving the face showing in the camera as class 1.
  3. Running face recognition and reporting whether class 1 is detected or not.

Script

Output

Read More
The example demonstrates how face recognition works with analysis on the stage.

The example demonstrates the application of face recognition on the stage. The following are the key steps:

  1. Initializing the program with parameters for the sprite and face detection library.
  2. Saving Chris’s face as class 1.
  3. Saving Robert’s face as class 2.
  4. Running face recognition and placing the square box sprite on the faces of Chris and Robert.

Script

Output

Read More
The example demonstrates the use of the clone and gliding functions in Sprite.

The example demonstrates the use of the clone and gliding functions in Sprite:

  1. Whenever the sprite is clicked, a clone is created.
  2. When a clone is created, its position is set to a random position at the top of the stage, and it then glides down to the bottom.
  3. When it reaches the bottom, the clone is deleted.

Script

Output

Read More
The example demonstrates how to make the sprite glide to a random position on the stage when it is clicked.

Script

Output

Read More
The example demonstrates how to use stamping and mouse-location sensing in Block Coding.

Script

Output

Read More
The example demonstrates how to use key sensing to control the movement of the sprite.

Script

Output

Read More
The example demonstrates the sprite’s wall bouncing and rotation style.

Script

 

Output

Read More
The example demonstrates how to make the sprite follow the mouse.

Script

Output

Read More
The example demonstrates how to add gravity to a bouncing-ball project.

Script

  1. Main Script to change the speed and position parameters of the ball.
  2. Custom function to initialize the ball position and speed with random variables.
  3. Custom function to check the boundary conditions and set the rules.

Output

Read More
The example demonstrates how to implement mouse tracking.

Script

Output

Read More
The example demonstrates how to add movement to a sprite using the key detection hat block.

Script

Output

Read More
The example demonstrates the sprite direction in PictoBlox.

Script

Output

Read More