
The Development of “Mobile Bi-manual Soft Fruit Harvesting Robot”


A DISSERTATION PREPARED BY:


IN PARTIAL FULFILMENT OF THE REQUIREMENTS FOR OBTAINING THE DEGREE OF MASTER OF


SUPERVISOR


DATE OF SUBMISSION


ABSTRACT

This dissertation describes the concept of an autonomous robot for fruit harvesting, specifically soft fruit harvesting. Robotic harvesting in cluttered, unstructured environments remains a challenge. The project aims to improve an existing Mobile Bi-manual Soft Fruit Harvesting Robot for soft fruit harvesting and related crop intelligence for Wilkin & Sons (Tiptree, Essex). The project will develop a commercially viable technology solution that promotes the long-term survival and continued growth of the soft fruit industry while addressing several interconnected and pressing issues, such as recurring severe labor shortages, to increase the productivity of soft fruit. Given the thin margins involved, the surge in demand for soft fruit, the high labor intensity of picking, and the rapidly growing pressure on farms, it is necessary to reduce management costs and to minimize waste and pesticide use to ensure environmental sustainability. The Mobile Bi-manual Soft Fruit Harvesting Robot will bring engineering and technological innovation to this sector.


ACKNOWLEDGMENT

I would like to express my sincere gratitude to my supervisor for his consistent support throughout my research, for I would not have completed this work without his help. I am also deeply indebted to my family and friends who have given me the moral and financial support required to do this research. I also wish to acknowledge my department for the thoughtful guidance in this partial fulfillment of my master’s degree. May you all prosper in your endeavors.


Table of Contents

ABSTRACT

ACKNOWLEDGMENT

1. INTRODUCTION

1.1. BACKGROUND

1.2. AIM

1.3. OBJECTIVES

2. LITERATURE REVIEW

2.1. End-effector and Manipulation

2.2. Pick and Place: Task-level robot system

2.3. Agriculture: Context in the U.K. and its path towards automation

2.3.1. Agriculture in the United Kingdom

2.3.2. Soft fruit and Human-Robot interaction in agriculture

2.4. Robots: Degrees of freedom, Denavit-Hartenberg, and kinematics

2.5. Mobile platform and Navigation

3. METHODOLOGY AND OVERVIEW OF THE SYSTEM

3.1. Manipulation system

3.1.1. The PMP Model

3.1.2. ANN for action

3.1.3. GNG

3.2. Vision system

3.3. ROS

4. Research Methodology and process

4.1. Artificial Neural Networks: MatLab

4.1.1. UR3 D-H parameters and forward kinematics

4.1.2. Test data generation

4.1.3. Training ANN in MatLab: Feedforward Multilayer Perceptron Network

4.1.4. Weights and Biases

4.2. Control System: Passive Motion Paradigm control and UR3 commands

4.2.1. Passive Motion Paradigm

4.2.2. UR3 Script: Universal Robots script

4.3. Vision System: ZED Camera, image and depth

4.3.1. Image

4.3.2. Depth

4.4. Systems integration

4.5. System Calibration

5. Testing/Analysis – Evaluation

5.1. Control System

5.1.1. Artificial Neural Network

5.1.2. Passive Motion Paradigm

5.2. Vision System

5.2.1. Target recognition

5.2.2. Depth obtained

5.3. Integration

5.4. Project Design, Specifications

6. Conclusions

Recommendations/Improvements

Limitations

Further Works

Collaborations

Other Applications

REFERENCES


INTRODUCTION

Soft fruit farming is a growing segment of the agricultural industry and contributes substantial income to national economies. According to market research company IndexBox, the global strawberry market amounted to 9.2 million tons in 2016, a 5% increase over the previous year. Soft fruit production relies heavily on human labor, especially for harvesting (Xiong, Peng, Grimstad, From, & Isler, 2019). The following year the U.K. recorded production sales of up to 126,000 tons; strawberries make up 65% of the retail market and 72% of the wholesale market, with the remainder made up of other berries, and the figure is projected to triple in the next decade. A report by Kantar shows that berries now account for 22% of all fruit sales in the U.K. and were expected to reach almost £2 billion by 2020. Innovations in berry growing, the extension of the season, and the development of new varieties have increased yields: the weight of Class 1 berries per plant has doubled from roughly 600 g per plant to nearly 1200 g per plant within 15 years. Although this innovation continues, the industry may stagnate without the ability to fully harvest this additional production through automation, and this is the main motivation for the bi-manual soft fruit robot. Despite several attempts to develop robotic solutions for harvesting strawberries and many other crops, a fully viable commercial system has yet to be established (Silwal et al., 2017). One of the major challenges is that robots need to operate equally efficiently within diverse, unconstrained environments and across crop variations with a variety of features (Bac, Hemming, & Van Henten, 2013; Silwal et al., 2017). A harvesting robot is generally a tightly integrated system, incorporating advanced features and functionality from numerous fields, including navigation, perception, motion planning, and manipulation (Lehnert, McCool, Sa, & Perez, 2018). These robots are also required to operate at high speed, with high accuracy and robustness, and at low cost, all of which are especially challenging in unstructured environments such as the strawberry farm used for testing in this work.

The purpose of the Mobile Bi-manual Soft Fruit Harvesting Robot is to provide a commercially viable solution to this problem, and the robot will be deployed on-site in the actual production environment of Wilkin's parent company. These farms cover 850 acres and employ more than 300 people during peak season. The direct impact of this technology will therefore be on the way soft fruit is harvested, which will directly benefit soft fruit growers in the U.K.

The Mobile Bi-manual Soft Fruit Harvesting Robot project will build and deploy a first-of-its-kind mobile bi-manual collaborative robot prototype that works with humans for soft fruit harvesting. The advanced technology will innovate in multiple directions, such as intelligent bi-manual manipulation, active sensing and perception, predictive yield analysis, organic agriculture, and the complete elimination of packing warehouses and related logistics.

1.1. BACKGROUND

These past two years have been challenging for the agriculture sector in the United Kingdom, with each year proving harder than the last to find enough fruit pickers for the harvesting season, and unfortunately this is a trend expected to continue. An increasingly mentioned solution to the problem is the use of robots in the fields: industrial robots capable of performing tasks such as pick and place of vegetables and fruits. To some extent, robotic harvesting is a straightforward step in an increasingly automated present and has already been applied in some segments of the sector. Nevertheless, this solution has to deal with specific crops and their variants. Soft fruits, such as strawberries, are especially complex: they are fragile, usually hidden in the foliage, and unevenly ripe throughout the same plant. Given the nature of the task, the robots will be expected to work in what is termed a changing, unstructured environment and to work in collaboration with human colleagues. The first challenge is the environment: most applications of robots assume fully controlled industrial environments.

Surroundings like those make it relatively simple to program robots, resulting in straightforward task-specific routine code. In a field, a robot needs a way to achieve some understanding of its surroundings, since the task changes continuously. The second challenge is the need for a precaution system for human-robot interaction, as well as some kind of awareness and inner representation that lets the robot anticipate movements and avoid possible injuries or collisions with its human counterparts. Pick and place is a simple task for humans. We can easily differentiate and classify objects, among them fruits. When aiming to grab a target we have the capacity, almost effortlessly, to know whether it is within our reach and how much force is required to grip it. To perform this process, our brain computes a series of data, including the location and position of the target and the displacement, speed, joint angles, and force our fingers, hand, and arm require to pick the object; the information is sent and the action is performed.

The entirety of these operations takes place in the blink of an eye, unnoticed by us. For decades there has been an intention to replicate this process in a robot; nevertheless, it has proven complicated, notably for constantly changing, unstructured environments. The main approach to programming industrial robots has always been a keen focus on inverse kinematics. This traditional approach to the pick-and-place task is a result of trying to replicate in robots the different calculations performed by the human brain.

The human brain just sees the target and automatically reaches for the object; the joint angles are processed immediately by the brain in response to localizing the target. For years, a similar approach on industrial robots has focused on attempting, from a location input, to perform the calculations needed to determine the joint values. This has been done in the form of kinematics, with a special focus on inverse kinematics. Inverse kinematics takes as input the location (coordinates) of an object or target; the output of an inverse kinematic analysis is a set of joint values that allow the robot to reach that location. Even though it is widely used in robotics, inverse kinematics presents a series of problems: there can be many solutions for a single input (infinitely many for redundant robots), and choosing the best one is computationally costly, not to mention the processing time and the inefficiency that follow from the redundant nature of robots. Forward kinematics, on the other hand, takes as input the joint angles and the relevant information about the body of the robot; the result obtained is a single location, the coordinates of the end point. Having only one result eliminates the need to choose, saving computational cost and time and allowing an easier configuration from robot to robot.
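
A tiny illustration of this asymmetry, using a hypothetical planar two-link arm with unit link lengths (all values here are illustrative, not part of the project): forward kinematics returns exactly one end point per joint configuration, while inverse kinematics already yields two closed-form solutions (elbow-up and elbow-down) for a single reachable target.

import numpy as np

L1 = L2 = 1.0  # assumed link lengths

def fk(q1, q2):
    """Forward kinematics: one unique end point per joint configuration."""
    return (L1 * np.cos(q1) + L2 * np.cos(q1 + q2),
            L1 * np.sin(q1) + L2 * np.sin(q1 + q2))

def ik(x, y):
    """Inverse kinematics: both closed-form solutions for a reachable (x, y)."""
    c2 = (x * x + y * y - L1 * L1 - L2 * L2) / (2 * L1 * L2)
    solutions = []
    for q2 in (np.arccos(c2), -np.arccos(c2)):  # elbow-down / elbow-up
        q1 = np.arctan2(y, x) - np.arctan2(L2 * np.sin(q2), L1 + L2 * np.cos(q2))
        solutions.append((q1, q2))
    return solutions

for q1, q2 in ik(1.2, 0.8):
    print(np.degrees([q1, q2]), "->", fk(q1, q2))  # both pairs reach (1.2, 0.8)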

1.2. AIM

The overall aim of this dissertation is to design and develop a 'Pick and Place' solution for strawberry picking. The solution must be implemented on the UR3 robot and tested using a real strawberry plant. It should implement a vision system capable of identifying a strawberry, determining its location, and reaching for it without human interaction in the process, providing a feasible solution for strawberry picking on a modern farm.

1.3. OBJECTIVES

The objective of this project is to develop a Pick and Place routine for the UR3 robot that localizes a strawberry and reaches for it. The goals of the dissertation are listed below:

  • Outline an integrated system capable of recognizing a strawberry through a vision system, obtaining its location, and accurately reaching that location.
  • Generate a motor control system configurable to any robot.
  • Train a Feedforward Multilayer Perceptron Artificial Neural Network for robot control.
  • Implement the Passive Motion Paradigm control concept to control robot movement.
  • Create a proficient vision scheme to identify and provide the relative position of a specific object, a strawberry.
  • Elaborate an implementation of the system on the collaborative industrial robot UR3.

2. LITERATURE REVIEW

2.1. End-effector and Manipulation

In this regard there has been vast development, with many considering the scissor end-effector the main tool for fruit detachment (Cui et al., 2013; Hayashi et al., 2014). The International Organization for Standardization (ISO) defines terms associated with robots and robotic apparatus, regardless of operation in industrial or non-industrial settings, in ISO 8373. It defines an industrial robot as an automatically controlled, reprogrammable, multipurpose manipulator, programmable in three or more axes, which can be either fixed in place or mobile, for use in industrial automation applications; the definition covers the manipulator (including its actuators) and the controller, including the teach pendant and any communication interface (hardware and software). Industrial robots can also be described as task-level robotic systems: they are capable of performing industrial tasks such as Pick and Place. For this project the industrial robot used was the UR3 collaborative industrial arm from Universal Robots. Produced by the Universal Robots company, the UR3 is the smallest industrial arm of the U.R. line. It is a small collaborative table-top robot, ideally designed for light assembly tasks and automated workbench schemas. As a result of its compact form and easy programming, applications for the robot span all types of industries. It is a robot with remarkable flexibility of movement, end-effector (end point) precision, and tool manipulation, intrinsic in its design.

Below are listed some of its main features:

  • Six articulation points
  • 11 kg weight
  • 3 kg (6.6 lbs) payload
  • ±360° rotation on all wrist joints
  • Infinite rotation on the end joint
  • 500 mm reach radius

 

These characteristics make the UR3 a flexible, lightweight robot designed with human collaboration in mind, praised as an industrial arm with very good speed and accuracy. It is created to be an exceptional assistant to employees in assembly, polishing, gluing, and screwing applications that require uniform product quality, and it likewise works efficiently at a workstation picking, assembling, and placing parts in production flows. The UR3 arm can be controlled in two ways: at the Graphical User Interface (GUI) level, also referred to throughout this document as the tablet, and at the Script level. At the Script level, URScript can be used as a programming language to control and monitor the UR3, its elements, and its movements. URController is the low-level controller that runs as a daemon on the UR3; an external program can connect to it as a client over a local TCP/IP connection, while PolyScope provides the user interface. There also exists a ROS package for Universal Robots; however, it mainly supports the UR5 and UR10.

 

2.2. Pick and Place: Task-level robot system

Pick and place is a common name for a type of task-level robot system; it can also be referred to as a motion class. A task-level robot system is defined as one that can be instructed in terms of task-level goals, such as: retrieve a part and deposit it at a specified target location. Task-level robot systems have been an objective of research since 1961. An important characteristic of task-level specifications is that they are independent of the robot performing the task. The general pick-and-place problem can be summarized as follows: during each assembly step, choose the grasp on the part, plan the motion to grasp it, plan a motion to the assembly location for the part, and plan another motion to extract the gripper.

This problem is complicated by the number of degrees of freedom: the more degrees of freedom (joints) a robot has, the harder it is to control. Over time the pick-and-place problem has been redefined as the need to optimize time by minimizing the placement time of the robot, which is achieved by finding the most suitable grasp position, the one that adds the least time to the task. The path-finding problem for an industrial robotic arm is inherently complicated; however, in applications the general problem is rarely required to be solved, since applications tend to deal with a less general problem. The literature presents examples of optimization for a pick-and-place task under certain task limitations, but these remain traditional pick-and-place systems. Industrial robots performing this kind of task have been implemented in a variety of industrial sectors, such as agriculture, food processing, sampling, assembly and manufacturing, several chain production processes, transport, logistics, and packaging, among many others.

2.3. Agriculture: Context in the U.K. and its path towards automation

Currently an atmosphere of uncertainty is present in the U.K. as a consequence of Brexit, which has started to affect the already damaged availability and confidence of migrant workers for agricultural work. Other developing issues, such as urban drift, improving economies, and demographics, have started to affect the Agri-Food industries (agricultural and food processes from the farm to the stores) in various sectors on a global scale. These jobs are hard, physically demanding, low-paid, dull in nature due to their repetitiveness, set in harsh workplaces, and somewhat unrewarding.

2.3.1. Agriculture in the United Kingdom

Even though Agri-Food is the largest manufacturing sector in the United Kingdom, it currently faces a major threat. Farmers in the United Kingdom are struggling to find sufficient pickers for the harvesting seasons; produce that is not picked rots in the fields and becomes a financial loss. The endangered segments represent substantial economic worth: soft fruit production alone is worth around £1.2 billion. This is not a novelty; it has been a recurring concern since 2013, when the government terminated the Seasonal Agricultural Workers Scheme (SAWS). However, the situation has worsened in recent years. Last year alone, the National Farmers Union (NFU) reported a 12.5% shortfall in the seasonal workers required for the agriculture sector.

In 2018, several sources reported a scarcity of workers for the food and farming businesses in Britain. The causes of the shortage originate in the nature of the farm workforce in the U.K. itself. A generation ago, field hands on U.K. farms and in food companies were predominantly British. Today's industry relies heavily on migrant labourers, approximately 65,000 of them. Only 1% of seasonal fruit pickers are British; the majority come primarily from eastern Europe, where there has been a sharp decline in available workers as well as a declining willingness to stay in the U.K. The situation is being intensified by the Brexit referendum which, combined with a weak pound and stronger economies in other European countries, has resulted in an unattractive work scheme. There is a similar pattern worldwide, with migrant labour replacing native workers in the majority of developed countries. There is overall agreement that the shortfall of agricultural workers is not going anywhere and will worsen in the years to come. In an attempt to attract more workers, several businesses have raised salaries, added bonuses, and improved accommodation, which will in turn raise production costs; strawberry prices could rise by as much as 30% to 50%. Another initiative is to reintroduce a scheme similar to the SAWS. Finally, implementing robots for picking crops would be a solution to lower the rising production costs and to fight the current and upcoming shortfalls.

 

2.3.2. Soft fruit and Human-Robot interaction in agriculture

Harvesting soft fruit is an automation challenge. Soft fruit, also called berries, is hard to harvest given the nature of the produce: it is small, fragile, and usually hidden in the foliage, which makes it difficult to reach. It is a perfect candidate for selective harvesting, which involves harvesting a crop only partially, picking just those fruits that meet certain quality criteria, in this case ripeness, which is uneven across a single plant. More research is badly needed into robotic technologies capable of operating close to the crop and possessing advanced manipulation skills, especially with interactive or tactile properties, for example for picking soft fruit without damaging it. Robotics and Autonomous Systems (RAS) is a present need in the United Kingdom, as discussed in the white paper "Agricultural Robotics: The Future of Robotic Agriculture" [7], published this year. It is based on thorough research of the matter and gives a general context of the needs, characteristics, and opportunities of agriculture in the U.K.

The full potential of RAS in agriculture will begin to be evident when different types of robots, autonomous systems, and human workers are brought together in a systemic approach. Planning, scheduling, and coordination are fundamental to the control of multi-robot systems in fields and on farms, and more generally for increasing the level of automation in the sector. However, robots will not simply appear and replace the industry's whole workforce; this technology is not intended to work completely by itself. Robots working in unstructured environments is a fairly recent research area: until this point in history, robots have mostly been used in industrial and controlled settings, and the agriculture sector, compared to others, is a field where automation is in its beginnings. For this reason, the technologies being introduced will work alongside human co-workers, with human operators and supervisors on farms, in crop fields, and in food factories. Another reason the technologies will be introduced slowly, under a largely human-controlled approach, is the varied size of the companies and farms that make up the agriculture sector. Only a minority of big companies would be able to withstand the economic impact of full automation at once; most agricultural businesses and farms will require technologies to be introduced gradually, compatibly with their current systems and procedures. Typical operations on a farm, for example, are composed of various tasks, some of which are sufficiently structured to be performed autonomously by a robotic system, while many others require skills and expertise that only a human worker can provide. Human supervision will also be a necessary safety factor for agri-robotic systems for the near future. For an easier transformation of the industry and integration of automation technologies, there is a need to clearly demonstrate economic benefits, which have always been the major driver of change for agricultural businesses. What is expected from these technologies is to increase productivity in large sectors of the economy that currently have relatively low productivity.

2.4. Robots: Degrees of freedom, Denavit-Hartenberg, and kinematics

Robots have multiple degrees of freedom (DOF). DOF can be defined as the number of independent measurements required to define the position and orientation of an object in space at any given instant, so that it is determined by unique parameters. Six degrees of freedom, or pieces of information, are necessary for a robot to indicate the location of an object and its desired orientation within its workspace, but there can be robots with fewer DOFs. Robots are devices, which can be considered mechanisms, with several joints that permit them to move freely within their reach zone. As such they can be considered kinematic chains; however, unlike the closed-loop mechanisms originally treated by the Denavit-Hartenberg notation, they are open (serial) chains.


2.5. Mobile platform and Navigation

In the past years the use of mobile robots for a vast range of agricultural activities has increased, from basic activities such as pruning (McCool et al., 2018), to high-throughput phenotyping (Vijayarangan et al., 2017), and further to transportation (Ye et al., 2017). Some of these mobile robots are created for specific tasks, that is, for only one application; examples in the literature include the sweet pepper-harvesting robot (Lehnert, English, McCool, 2017). Some are also used commercially, an example being the pruning robots created by companies like Agrobot. Other robots can be used in multiple applications and are thus regarded as generic; for example, the company Agrointelli created Robotti for commercial purposes. Many of these robots rely on a mobile base that is specifically designed for one type of environment: a base designed for driving in tractor-sized tracks in open fields, for example, usually does not fit in a greenhouse.

3. METHODOLOGY AND OVERVIEW OF THE SYSTEM

In this section we explain the methods and ideas carried out in order to arrive at our solution. The method can be divided into two structures: the manipulation system and the vision system. ROS, the middleware required by the bi-manual soft fruit robot, is then introduced.

3.1. Manipulation system

Throughout the history of robotics there have been many operating theories. For this particular study we will focus on three models: the passive motion paradigm, artificial neural networks, and growing neural gas.

3.1.1. The PMP Model

The PMP model is created to mimic the operation of a normal human arm and hand, including the operation of the nerves, joints, and so on, and how each part works hand in hand. To explain the neural control of movement, the Equilibrium Point Hypothesis (EPH) proposes that the human body's posture is not controlled directly and in detail by the brain; rather, biomechanics achieves a balance between a large number of muscles and environmental forces. In other words, by simply allowing the intrinsic dynamics of the neuromuscular system to seek its equilibrium state when triggered by an intended target, it is possible to achieve complex actions without a complex, high-dimensional optimization process. A reasonable extension of EPH theory is to consider what happens in the brain during mental simulation and overt performance of actions, reflecting an intrinsic cortical dynamics that is very similar to the physical dynamics implied by EPH, but achieved by animating a flexible, malleable, and configurable internal representation of the body with the attractor dynamics of the force field generated by the intended target. This idea appeared in the form of the passive motion paradigm (PMP) in the early 1980s and was later redesigned (Mohan and Morasso, 2011; Mohan et al., 2009; Mussa-Ivaldi et al., 1988). The basic idea can be stated qualitatively: when the end effector is assigned the task of reaching a spatial target point, the process by which the brain determines the distribution of work over the set of redundant joints can be expressed as an internal simulation of a body model. This process calculates how much each joint will move if an externally induced force (i.e., the target) pulls the end effector a small amount toward the target. The PMP thus models motions caused by changes in internal (joint) space as motions that cause changes in external (task) space.

PMP is not computationally expensive, especially when coordinating highly redundant robots, and it is closely related to other methods based on active inference, which also avoid inverse kinematics. Second, PMP provides runtime configurability: the PMP network can be dynamically assembled based on the nature of the motion task and on the body chain or tool selected for execution. The fact that PMP can not only generate real movements but also simulate imagined movements, predicting their perceived results, is important. We believe that internal body models can act as links, or middleware, between real and imagined actions. Running internal simulations on a set of interconnected neural networks must be the primary function of the body model, and we believe the proposed PMP model provides a possible computational formulation that explains results from neuroscience. In addition, PMP reinforces the notion that cognitive motor processes (such as action planning) share the same representation as motor execution. This enables cognitive agents to think ahead and plan their behavior in the environment in a goal-oriented manner; in this sense, the PMP framework is closely related to imitating the human arm.
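
To make the relaxation idea concrete, here is a minimal, hedged sketch of PMP-style reaching for a planar two-link arm; the link lengths, stiffness, admittance, and step size are assumptions for illustration and are not the parameters of the actual system.

import numpy as np

L1, L2 = 1.0, 1.0  # assumed link lengths

def fk(q):
    """Forward kinematics of the internal body model."""
    return np.array([L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1]),
                     L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1])])

def jacobian(q):
    """Jacobian of the end-effector position with respect to the joints."""
    s1, s12 = np.sin(q[0]), np.sin(q[0] + q[1])
    c1, c12 = np.cos(q[0]), np.cos(q[0] + q[1])
    return np.array([[-L1 * s1 - L2 * s12, -L2 * s12],
                     [ L1 * c1 + L2 * c12,  L2 * c12]])

def pmp_reach(q, target, K=2.0, A=0.5, dt=0.01, steps=2000, tol=1e-3):
    """The target exerts a virtual attractive force on the end effector;
    the joints relax passively through the Jacobian transpose, so no
    inverse kinematics is ever computed."""
    for _ in range(steps):
        x = fk(q)
        F = K * (target - x)                 # force field of the target
        q = q + A * jacobian(q).T @ F * dt   # passive joint relaxation
        if np.linalg.norm(target - x) < tol:
            break
    return q

q_final = pmp_reach(np.array([0.3, 0.5]), np.array([1.2, 0.8]))
print(q_final, fk(q_final))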


3.1.2. ANN for action

A standard picture of a neural network is a set of nodes (often called units, cells, or neurons) and a set of weighted connections (links or synapses) between the nodes. Each node is associated with a transfer function that maps its set of inputs (an input vector or pattern) to an output or activity. Normally, activity flows along the connections and is processed at the nodes, and the output of a node is passed as input (usually weighted by the relevant synaptic weight) to the next layer of nodes. A feedforward network may use one or more intermediate hidden processing layers to establish a connection from the external input nodes (usually presented with some activity pattern) to the output nodes (the classification of the input pattern), with a constant bias unit providing a weighted input to each node. For an example feedforward network, see Figure 5.2.2. A recurrent network may have connections that feed back to earlier layers, so the network can retain activity over time, providing some form of memory and making the network more than a simple reactive input-output mapping. There are many variations on this standard theme: the Elman network feeds the final-layer node outputs back as an input pattern to the input layer at the next time step (Elman, 1988); in radial basis function networks the node activity depends on the distance between the input vector and a prototype vector (Bishop, 1995); while self-organizing maps (Kohonen, 1984, 1997) and other associative memory models (see, for example, McClelland and Rumelhart, 1981; Hopfield, 1982) act to reduce the distance between the input vector and a prototype vector.

Usually, an external activity pattern (an N-dimensional vector) is presented to the network's input layer, and each layer of nodes is then processed in turn to find the network's output vector in one pass. As a result, artificial neural networks produce non-linear mappings from input vectors to output vectors, so they can be used and analyzed like many other non-linear statistical techniques (Bishop, 1995, gives a good introduction to the statistical properties of neural networks). The methods used to train a network differ between network categories, but supervised gradient descent methods (such as backpropagation) are among the most widespread techniques. The network's actual output vector for an input pattern is computed and compared with the required output vector, and this comparison is used to change the network so as to reduce the error. Training continues until the output error on samples of known input patterns falls below an acceptable level, and the network is then tested for generalization on previously unseen examples. Typical applications include pattern classification, function approximation, and feature extraction.


The argument that ANNs are suitable for robot control rests mainly on the following properties of artificial neural networks:

  1. Artificial neural networks are very flexible and can be modified at many different levels, from simply changing weights to changing the entire underlying architecture.
  2. Artificial neural networks are also well suited to incorporating mechanisms such as lifelong learning, which may enable agents to adapt to a changing environment.
  3. The network can take inputs from a variety of sources, including continuous and discrete sensor readings, and can similarly produce discrete or analog motor outputs.
  4. Memory can easily be incorporated into the network by retaining activity over time, so that the output motor activity is a function of both the current sensor readings and the previous state of the network.
  5. ANNs are generally considered robust to noisy input data; when sampling sensor data from non-trivial environments, noise inevitably becomes a significant factor.
3.1.3. GNG

In humans, the representation of peripersonal space is essential for the sensory guidance of motor behavior, enabling us to interact with objects and other people in the surrounding space. To allow the two arms of the Mobile Bi-manual Soft Fruit Harvesting Robot to cooperate without conflict, we can use the Growing Neural Gas (GNG) algorithm to learn from the same data used to learn the internal body model (the neural PMP). GNG is an unsupervised, incremental network model that can learn the important topological relationships in a given set of input vectors through simple Hebb-like learning rules; the model keeps learning, adding new neurons and connections, until its performance criteria are met. The preliminary idea is to use two GNG models, one assigned to each arm. Based on the layout of the scene space perceived by the vision system, the configuration of the scene (that is, the different objects in the workspace and their 3D positions) generates neural field activity in the two GNG models; in other words, the neural activity in these GNG networks is an internal representation of the spatial layout of the external scene. Due to the reward structure imposed on the two GNG networks, each network generates the highest reward for a specific object. The object with the highest reward becomes the target, and its position is forwarded to the PMP model, which controls the movement of the robotic arm. In an actual strawberry-picking scene, one robotic arm is responsible for fixing the stem of the strawberry while the other cuts it, and the neural fields in the two GNG networks with reward dynamics then choose targets that lie closer and closer together (on the stem of the same strawberry). Therefore, although the internal model effectively assigns sub-goals within peripersonal space, conflicts may still occur in the shared workspace (when the stem of a strawberry is short, a collision between the two arms may occur). As mentioned above, as soon as a possible collision is detected from the expected motion trajectories, the movements of the two arms need to be rescheduled to avoid it, for example by serializing the movements of the two arms.
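
For reference, a compact, illustrative Growing Neural Gas sketch follows (after Fritzke's formulation). All parameters and the simulated berry positions are assumptions, removal of isolated nodes is omitted for brevity, and this is not the project's implementation.

import numpy as np

rng = np.random.default_rng(0)

def gng(data, max_nodes=30, eps_b=0.05, eps_n=0.006, age_max=50,
        lam=100, alpha=0.5, decay=0.995, iters=10000):
    """Learns the topology of the input distribution with Hebb-like
    edge refreshing and error-driven node insertion."""
    nodes = [data[rng.integers(len(data))].astype(float) for _ in range(2)]
    error = [0.0, 0.0]
    edges = {}  # key: (i, j) with i < j, value: edge age
    for t in range(1, iters + 1):
        x = data[rng.integers(len(data))]
        d2 = [float(np.sum((x - w) ** 2)) for w in nodes]
        order = np.argsort(d2)
        s1, s2 = int(order[0]), int(order[1])
        error[s1] += d2[s1]
        nodes[s1] += eps_b * (x - nodes[s1])            # move the winner
        for (i, j) in list(edges):
            if s1 in (i, j):
                edges[(i, j)] += 1                      # age the winner's edges
                k = j if i == s1 else i
                nodes[k] += eps_n * (x - nodes[k])      # drag its neighbours
        edges[(min(s1, s2), max(s1, s2))] = 0           # Hebb-like edge refresh
        edges = {e: a for e, a in edges.items() if a <= age_max}
        if t % lam == 0 and len(nodes) < max_nodes:     # grow where error is high
            q = int(np.argmax(error))
            nbrs = [j if i == q else i for (i, j) in edges if q in (i, j)]
            if nbrs:
                f = max(nbrs, key=lambda n: error[n])
                nodes.append((nodes[q] + nodes[f]) / 2.0)
                r = len(nodes) - 1
                edges.pop((min(q, f), max(q, f)), None)
                edges[(min(q, r), max(q, r))] = 0
                edges[(min(f, r), max(f, r))] = 0
                error[q] *= alpha
                error[f] *= alpha
                error.append(error[q])
        error = [e * decay for e in error]
    return np.array(nodes), list(edges)

# learn the layout of simulated berry positions in a 3D workspace
points = rng.uniform([-0.3, 0.2, 0.0], [0.3, 0.5, 0.4], size=(500, 3))
nodes, edges = gng(points)
print(len(nodes), "nodes,", len(edges), "edges")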

3.2. Vision system

The Mobile Bi-manual Soft Fruit Harvesting Robot relies on the vision system to find and locate the strawberry in order to complete the picking. The powerful and advanced human visual system obtains information about the location and other attributes of objects in the environment. Because there is a difference between the images formed on the retinas of the left and right eyes, a three-dimensional sensation (depth) is generated: during image formation, the images in the two eyes are not equal, since the observed object appears at slightly different positions, which is attributable to the separation between the eyes. Artificial stereo vision systems are often inspired by this biological process and can extract three-dimensional information from digital images, which can be used to perform 3D reconstruction, tracking, and object detection. The ZED camera is one of the more recently developed stereo sensors. It has two high-resolution cameras that simultaneously capture images (left and right) and transfer them to an external computing device for processing. It is a passive device, that is, it relies on another device for its power. According to the manufacturer's information, it is designed to sense the depth of objects in indoor and outdoor environments over a range of 1 to 20 meters, at up to 100 FPS. The camera has a compact structure and small size; details of the dimensions are shown in Figure 5.2, and Table 1 summarizes the most important features of the ZED camera. These features make it relatively easy to integrate into robotic systems or drones.

<table goes here>

As described above, due to the differences observed in the images formed by the left and right retinas, humans can perceive the world in three dimensions through the eyes. During the imaging process, the images sent to the brain from each eye are different. Due to the separation between the eyes, the positions of the objects are slightly different, and they form a triangle with the scene points. Because of this difference, through triangulation, the brain can determine the distance (depth) of the object relative to the observer’s position. The realization of stereo vision in computers uses this basic principle to recreate a 3D scene representation based on two images acquired from different viewpoints.[15]
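
As a minimal numeric illustration of this triangulation principle: with focal length f (in pixels), stereo baseline B, and disparity d (the pixel shift of a matched point between the left and right images), depth follows from similar triangles as Z = f·B/d. The focal length and disparity below are assumed values; the 120 mm baseline is the ZED's published lens spacing.

f_px = 700.0   # focal length in pixels (assumed value for HD720)
B = 0.12       # stereo baseline in metres (ZED: 120 mm)
d = 42.0       # disparity in pixels of one matched point (assumed)

Z = f_px * B / d             # depth of the scene point from the camera
print(f"depth: {Z:.2f} m")   # -> depth: 2.00 m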

3.3. ROS

Robot Operating System (ROS) is a flexible framework for writing robot software. It is a collection of tools, libraries, and conventions designed to simplify the task of creating complex and robust robot behaviors across a variety of robot platforms. Creating truly capable general-purpose robot software is difficult: problems that are trivial for humans can vary enormously between task instances and environment instances from a robot's perspective. Dealing with these variations is so difficult that no single individual, laboratory, or institution can hope to do it alone. ROS was therefore built from the ground up to encourage collaborative robot software development. For example, one laboratory might have experts in mapping indoor environments and provide a world-class system for creating maps; another group may have experts at using maps for navigation; and yet another may have developed a computer vision approach that works well for recognizing small objects in clutter. ROS is designed specifically for groups like these to collaborate and build upon each other's work.

4. Research Methodology and process

This chapter explains the approach taken to arrive at a solution to the problem. The actions taken to design and construct the system are illustrated, together with the results obtained in each area. The system is made up of distinct elements, and the final section explores the integration of the various parts.

Flowchart goes here

Fig. 5 shows the concept outline of the system; it broadly illustrates the connections and elements implemented in this project.

4.1. Artificial Neural Networks: MatLab

The tasks undertaken are briefly stated here. First, a dataset intended for training the ANN was created. Then MatLab was used to carry out the feedforward network training. Finally, the ANN was tested for accuracy against the real robot.

4.1.1. UR3 D-H parameters and forward kinematics

Data sets are a necessity for training an ANN, and creating one requires the mass generation of points. Obtaining points directly from the robot would be the most direct approach: using the tablet, one can confirm that the points are real and reachable within the workspace of the UR3 arm. This method is unrealistic, however, in terms of the time it would take versus the immense number of points needed for the set. As a result, the kinematic equations for the robot were derived in order to create a program capable of mass generation. To obtain and calculate the kinematic chain model of the UR3, guidance was taken from a report that derived the inverse kinematics for the UR5 and UR10 [16]. The D-H parameters were obtained from Universal Robots Support and were empirically confirmed in the lab.

 

                          Fig.6 – U.R. arm coordinate scheme for D-H parameters [16].

Fig. 6 shows how the arm coordinate scheme was established for the UR3 in order to obtain the D-H parameters. The diagram is drawn with all joints at 0°, and each joint rotates around its z axis. The table below shows the values obtained for the D-H parameters of the UR3 robot, ordered from the base to the tool mounting bracket. The values of a, d, and alpha are determined here and later used in the forward kinematic analysis to derive the equations for the position.


Joint                   a (m)           d (m)           α (alpha)
Base                    0               d1 = 0.1519     π/2
Upper arm               a2 = −0.2437    0               0
Lower arm               a3 = −0.2133    0               0
Wrist 2                 0               d4 = 0.1124     π/2
Wrist 3                 0               d5 = 0.0854     −π/2
Tool mounting bracket   0               d6 = 0.0819     0

Table 2 – D-H parameters for the UR3

 

Using the UR3 robot's D-H parameters, a MatLab program, "Function.m", was generated to obtain the forward kinematics equations. The corresponding values of a, d, and alpha appear in the code. Using these parameters, the function "An" generates the matrices corresponding to each link of the robot's kinematic chain, A1, A2, …, A6. Finally, the code calculates their product T, a 4×4 matrix containing, as illustrated in (12), the equations for x, y, and z.
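
For illustration, a Python sketch of the same computation follows, an analogue of "Function.m" under the standard D-H convention using the Table 2 parameters; it is a hedged reconstruction rather than the project's MatLab code.

import numpy as np

# UR3 D-H parameters from Table 2: one (a, d, alpha) triple per joint
DH = [(0.0,     0.1519,  np.pi / 2),
      (-0.2437, 0.0,     0.0),
      (-0.2133, 0.0,     0.0),
      (0.0,     0.1124,  np.pi / 2),
      (0.0,     0.0854, -np.pi / 2),
      (0.0,     0.0819,  0.0)]

def A(theta, a, d, alpha):
    """Single-link D-H transform A_n."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([[ct, -st * ca,  st * sa, a * ct],
                     [st,  ct * ca, -ct * sa, a * st],
                     [0.0,      sa,       ca,      d],
                     [0.0,     0.0,      0.0,    1.0]])

def forward_kinematics(joints):
    """Multiply A1..A6 to obtain T; the last column of T holds x, y, z."""
    T = np.eye(4)
    for q, (a, d, alpha) in zip(joints, DH):
        T = T @ A(q, a, d, alpha)
    return T

T = forward_kinematics(np.radians([0, -90, 0, -90, 0, 0]))
print("end-effector position (x, y, z):", T[:3, 3])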


4.1.2. Test data generation

Generating training data sets is essential in training an ANN, since the network learns from these files; in this project the ANN was provided with a training set. The training dataset should contain both the joint angles and the resulting x, y, and z values of the position reached by the robot. Initially, the data sets were created using all six joint values, but the last joint's value does not affect the end point reached, so the final data sets were created using only the first five values. The only requirement on the code is to generate inputs consisting of changing degree values for the joints, because the position information is calculated iteratively using the forward kinematic equations.

Two ways of achieving this were used. In the first method, random numbers are generated in each iteration as inputs for the joint values, and the values of x, y, and z are then evaluated for each of these combinations; two programs were written around this idea, "DataGeneration.m" and "CompleteDatageneration.m". The second concept is based on an interval mode, in which the input for each joint starts from an initial value to which a set quantity is added in each iteration. The outputs of all the programs are stored in a text file, and these data are used in the training process. The data set values are restricted according to the workspace of the industrial arm, as well as to the ranges within which the joints were to be kept. These limits confine the data to the portion of the workspace where the robot would need to work, the region where the strawberry targets would be found. The input and output values are filtered so that only values within certain ranges are recorded into the training data set, considering the spherical workspace described next.

As mentioned earlier, the UR3 robot has a 500 mm reach radius [18, 19]. If a sphere of this radius is drawn with its centre at the base (0, 0), it encloses all the possible reaching space of the UR3. In practice the real radius was bigger, at some points reaching 600 mm. The manufacturer's measurement, or the tool mounted on the robot, may restrain it from overextending its joints.
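
A hedged Python sketch of the random-sampling generator follows, an analogue of "DataGeneration.m"; it reuses forward_kinematics from the D-H sketch above, and the joint ranges and workspace filter are illustrative assumptions rather than the project's exact values.

import numpy as np
# forward_kinematics(...) is the function from the D-H sketch in section 4.1.1

rng = np.random.default_rng(42)

def generate_dataset(n, path="training_set.txt"):
    rows = []
    while len(rows) < n:
        q = rng.uniform(-np.pi, np.pi, size=5)       # only the first five joints
        T = forward_kinematics(np.append(q, 0.0))    # joint 6 does not move the end point
        x, y, z = T[:3, 3]
        # keep only points in the region where the strawberries would hang
        if 0.2 <= np.hypot(x, y) <= 0.5 and 0.0 <= z <= 0.4:
            rows.append(np.concatenate([q, [x, y, z]]))
    np.savetxt(path, np.array(rows))

generate_dataset(10000)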

4.1.3. Training ANN in MatLab: Feedforward Multilayer Perceptron Network

The next step is to train the ANN, which is done in MatLab. This computation requires a powerful P.C., because the computations take a long time. Different versions of "rough_code_trainall….m" were used to train the Artificial Neural Network; the codes were provided by the instructor. The code trains a feedforward multilayer perceptron, also known as a backpropagation network. Two text files are defined in the code and can be changed: the first contains the input joints, while the second contains the output end positions. The files are generated in the prior stages of the project, as described earlier. The code allows modification of various values and variables besides the files. The training parameters defined are: show, the number of epochs (iterations), goal, and max_fail. Show is the number of epochs between screen displays, such as those in Fig. 7 and 8, while epochs is the number of iterations to train.

The goal is the performance value one wishes to obtain, and max_fail is the maximum number of validation failures accepted. The number of iterations is chosen, and the numbers of neurons in layers one and two are designated in the code. The code and the text files must be in the same folder for the network to be trained; the program is then run. For each network, the training data set size and the number of neurons per layer were changed, searching for the best combination of neurons per layer and training data set size. Multiple networks were trained with this approach until one was arrived at: the network that, when tried against the robot, gave values closest to the real ones with a good, as low as possible, performance value. At the end of training the network was saved by saving the workspace. The ANN with the best performance, both in MatLab training and in physical testing with the UR3 robot, is shown on screen in Fig. 9; this network uses five joint values as input, 52 neurons in the first layer, and 600 neurons in the second, as can be seen in the image.
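
As a point of comparison, a hedged Python analogue of this training step is sketched below, using scikit-learn in place of MatLab's toolbox; the file name and hyperparameters simply mirror the text above and are not the project's exact settings.

import numpy as np
from sklearn.neural_network import MLPRegressor

data = np.loadtxt("training_set.txt")        # produced by the generator sketch
joints, positions = data[:, :5], data[:, 5:]

# two hidden layers, mirroring the chosen network (52 and 600 neurons)
net = MLPRegressor(hidden_layer_sizes=(52, 600), activation="tanh",
                   max_iter=2000, tol=1e-6, random_state=0)
net.fit(joints, positions)                   # inputs: joints; targets: x, y, z

print("training loss:", net.loss_)
print("predicted end point:", net.predict(joints[:1]))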

4.1.4. Weights and Biases

After the Artificial Neural Network is trained, it is saved with a '.mat' extension. The saved file contains the network, which can be used in MatLab to test values by inputting a vector with the joint values. The file also contains the weights and biases of the network, which are separated into six text files, one weight matrix and one bias vector per layer. For example, if the workspace contains a network "Ocean", "Ocean.IW" stores one input weight matrix (since the inputs are joints, it has five or six columns); "Ocean.LW" stores two layer weight matrices, the first holding the weights from layer one to layer two and the second the weights from layer two to the output layer; and "Ocean.b" contains three bias vectors. The MatLab code below obtains these values from any saved network and stores them in the six text files.
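
To show how those six exported files can be used outside MatLab, here is a hedged Python sketch of the network's forward pass; the file names are illustrative, the tanh hidden layers and linear output follow MatLab's feedforwardnet defaults, and MatLab's default input/output normalization (mapminmax) is omitted for brevity.

import numpy as np

# one weight matrix and one bias vector per layer, as exported to text files;
# the file names here are assumptions
IW  = np.loadtxt("weights_layer1.txt")   # input -> layer 1   (net.IW{1,1})
LW2 = np.loadtxt("weights_layer2.txt")   # layer 1 -> layer 2 (net.LW{2,1})
LW3 = np.loadtxt("weights_layer3.txt")   # layer 2 -> output  (net.LW{3,2})
b1  = np.loadtxt("bias_layer1.txt")
b2  = np.loadtxt("bias_layer2.txt")
b3  = np.loadtxt("bias_layer3.txt")

def ann_forward(joints):
    """Five joint angles in -> predicted (x, y, z) out."""
    a1 = np.tanh(IW @ joints + b1)
    a2 = np.tanh(LW2 @ a1 + b2)
    return LW3 @ a2 + b3

print(ann_forward(np.radians([10, -80, 30, -40, 15])))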

4.2. Control System: Passive Motion Paradigm control and UR3 commands

The control system consists of communicating information in a correct and consistent manner from one part of the system to another. Its main and central element is the Passive Motion Paradigm; the rest of the control system consists of transmitting the instructions accurately to the UR3 robot. It is illustrated in Fig. 10.

 

<flow chart goes here>

 

4.2.1. Passive Motion Paradigm

The Passive Motion Paradigm (PMP) takes a set of coordinates (x, y, z) and outputs the corresponding joint values for the robot, using the weights and biases of the neural network. The code reads the "robotposition.txt" file, which contains the target location. The attraction field values of the system were tuned during the experiments to find the ideal values for the weights and biases provided by the ANN; a run was considered a success when the values obtained by the PMP were within 2 mm of the reported value. To make sure the code truly worked, it was tested with 15 values from the data set and 15 other random values obtained with the robot's free-drive function. Close values were achieved with the PMP code, which led to empirically modifying the attraction fields corresponding to each joint. "resultL.txt" is a text file that the PMP code creates and writes: the expected joint values over a number of iterations are written to it, and the file is used to transmit information within the system. These codes are found in the PMP folder of the code.
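
A hedged glue sketch of this step follows: it reads the target from "robotposition.txt", relaxes a PMP loop toward it using ann_forward (from the Weights and Biases sketch) as the internal body model with a finite-difference Jacobian, and writes the joint trajectory to "resultL.txt". The gains, step size, and starting posture are assumptions.

import numpy as np
# ann_forward(...) is the forward pass from the Weights and Biases sketch

def numerical_jacobian(f, q, eps=1e-5):
    """Finite-difference Jacobian of the ANN forward model."""
    J = np.zeros((3, len(q)))
    for i in range(len(q)):
        dq = np.zeros(len(q)); dq[i] = eps
        J[:, i] = (f(q + dq) - f(q - dq)) / (2 * eps)
    return J

K, ADMITTANCE, DT = 2.0, 0.5, 0.05            # assumed gains and step size
target = np.loadtxt("robotposition.txt")      # x, y, z from the vision system
q = np.radians([0, -60, 40, -70, 20])         # assumed starting posture
trajectory = []
for _ in range(800):
    x = ann_forward(q)                        # ANN as the internal body model
    F = K * (target - x)                      # attractor force field
    q = q + ADMITTANCE * numerical_jacobian(ann_forward, q).T @ F * DT
    trajectory.append(q.copy())
    if np.linalg.norm(target - x) < 0.002:    # the 2 mm success criterion
        break
np.savetxt("resultL.txt", np.degrees(trajectory))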

4.2.2. UR3 Script: Universal Robots script

There are two approaches to programming a Universal Robot. The first method uses the PolyScope GUI on the touch-screen teach pendant; the second is to use the URScript language. The two methods can also be combined. In this project the script language is used to program the robot. Script programming can be done by adding script commands to PolyScope running on the U.R. robot itself; alternatively, a client application on a second P.C. is created by writing a program that connects to URControl through a TCP/IP socket. There are three ports that can be used to send raw commands from an external device: 30001, 30002, and 30003. This program uses port 30002 to establish the network connection. A Python class was created to enable sending commands in clear text to the socket. The "UR3Routines.py" file contains the UR3 class. The file does not include every function in the URScript programming language manual, only the functions used for this research; the code is written in Python. The URScript commands and programs are sent as text on the socket, and every line sent is terminated by "\n". The most important functions are those that connect to the robot and move it through input joint values. They are explained below. The connect function takes the HOST and PORT values from the code and connects to the robot through a socket; this connection is required before sending any information to the UR3 robot, and without it the robot will not receive any commands.

MoveAngles(self, t1, t2, t3, t4, t5, t6, a=1.4, v=1.05, t=0, r=0)

This function provides the robot with six joint values in degrees, and its output makes the robot move its joints to those six angles. It wraps the URScript movej command, which receives the joint positions as a vector together with the joint acceleration, speed, time, and blend radius. These functions are used in the programs "tryMoveTo" and "MoveToUsingRoutine" and their subsequent variants. The "MoveToUsingRoutine" code is the one intended to control the robotic arm in the system: it reads the joint values from the "resultL.txt" text file, which contains the outputs of the PMP, and the MoveAngles function then moves the robot to the location of the strawberry.
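
Below is a hedged reconstruction of such a client, in the spirit of "UR3Routines.py": URScript text is sent line by line, terminated by "\n", over TCP/IP to port 30002. The robot's IP address is an assumption, and movej's default arguments (a=1.4, v=1.05, t=0, r=0) match the signature quoted above.

import math
import socket

HOST, PORT = "192.168.0.100", 30002   # robot address is an assumption

class UR3:
    """Minimal client sketch; not the project's actual class."""

    def connect(self):
        # URScript commands are accepted as raw text on port 30002
        self.sock = socket.create_connection((HOST, PORT))

    def send(self, line):
        # every line sent to the socket must be terminated by "\n"
        self.sock.sendall((line + "\n").encode("utf-8"))

    def move_angles(self, t1, t2, t3, t4, t5, t6, a=1.4, v=1.05, t=0, r=0):
        # movej expects radians, while MoveAngles takes degrees
        q = [math.radians(x) for x in (t1, t2, t3, t4, t5, t6)]
        self.send("movej([%f,%f,%f,%f,%f,%f], a=%f, v=%f, t=%f, r=%f)"
                  % (*q, a, v, t, r))

robot = UR3()
robot.connect()
robot.move_angles(0, -90, 0, -90, 0, 0)   # a home-like posture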

4.3. Vision System: ZED Camera, image and depth

The robot should recognize the targeted object while minimizing false positives, since wrong picks of strawberries could lead to heavy losses, and the 3D information of the object must be analyzed. Recognizing the target object takes place in three steps: the first involves detection and recognition of the object, the next estimates its position, and the final step is to grasp the strawberry. The process is summarized in the diagram below:

 

<flowchart goes here>

In this project a top-down approach is used, since the kinds of objects to be identified are known in advance. The ZED camera can compute depth maps using CUDA; however, it cannot be used for distances shorter than one meter. The system depends on the camera's ROS integration: ROS (Robot Operating System) is a collection of software frameworks for robot development, and the vision system was created on top of the existing ROS wrapper for the ZED SDK.

4.3.1. Image

The image processing is based on color thresholding, exploiting the color differences among ripe strawberries, green strawberries, and the green plant. From this concept, a program called "colors.py" was written to test object recognition by color: once an object is recognized, a square is drawn around it together with a word identifying the object. The code was later modified into "pixelpointsfin.py", which uses the same approach to identify the pixels of interest in the image and publish them on a ROS topic. These files were written in Python. "colors.py" is mainly for visual purposes: since the information is in the form of images and is constantly being published, it is possible to see which object is being recognized. "pixelpointsfin.py", on the other hand, publishes the information the system actually needs.
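
A hedged sketch of this color-threshold recognition follows; the HSV bounds and minimum blob area are illustrative assumptions that would need tuning on-site, and the detected pixel is printed here where the real code publishes it on a ROS topic.

import cv2
import numpy as np

frame = cv2.imread("left_image.png")               # a ZED left-eye frame
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

# red wraps around hue 0 in OpenCV's 0-179 hue range, so use two bands
mask = cv2.inRange(hsv, (0, 120, 70), (10, 255, 255)) | \
       cv2.inRange(hsv, (170, 120, 70), (180, 255, 255))

contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)  # OpenCV 4 signature
for c in contours:
    if cv2.contourArea(c) > 200:                   # ignore small blobs
        x, y, w, h = cv2.boundingRect(c)
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.putText(frame, "strawberry", (x, y - 5),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 1)
        print("pixel of interest:", x + w // 2, y + h // 2)
cv2.imwrite("detections.png", frame)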

4.3.2. Depth

The ZED camera creates depth maps; this means that for each pixel (x, y) in the image the camera stores a distance value (z). The depth z is calculated from the back of the camera's left eye to the object in the scene. The ZED camera's depth map is accessed by the code file "depthfilefin.cpp". To obtain the current pixel at which the strawberry is located, the .cpp file subscribes to the ROS topic published by "pixelpointsfin.py"; the depth map is then accessed and the value for that pixel retrieved. The code writes this location to a text file, "robotposition.txt", which is used by the PMP in the control system. The file is constantly overwritten so that it holds one location at a time.
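
For illustration, here is a hedged Python analogue of "depthfilefin.cpp" (the original is C++); the topic name follows the ZED ROS wrapper's convention but is an assumption, as are the fixed pixel, which in the real system arrives from "pixelpointsfin.py", and the camera intrinsics.

import rospy
from cv_bridge import CvBridge
from sensor_msgs.msg import Image

bridge = CvBridge()
pixel = (640, 360)                    # assumed; really published by pixelpointsfin.py
f_px, cx, cy = 700.0, 640.0, 360.0    # assumed left-camera intrinsics

def on_depth(msg):
    depth = bridge.imgmsg_to_cv2(msg, desired_encoding="32FC1")
    u, v = pixel
    Z = float(depth[v, u])            # metres from the left camera
    X = (u - cx) * Z / f_px           # back-project the pixel to a 3D point
    Y = (v - cy) * Z / f_px
    with open("robotposition.txt", "w") as out:   # one location at a time
        out.write(f"{X} {Y} {Z}\n")

rospy.init_node("depth_lookup")
rospy.Subscriber("/zed/depth/depth_registered", Image, on_depth)
rospy.spin()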

4.4. Systems integration

The system integration makes it possible for the robot to harvest continuously along the strawberry rows: the robot stops at a picking position, carries out the picking operation, and moves on when picking at that position is finished. The system was integrated by means of text files and the ROS middleware, as described in each section throughout the report; the ROS bridge was used in the vision system with the ZED cameras. The implementation makes use of ROS subscribers, publishers, launch files, and visualization tools.

4.5. System Calibration

Calibration is the process of determining and adjusting an object's accuracy, making sure the object stays within the manufacturer's specification. The system is very sensitive to calibration: the camera's exact location must be set relative to the arm. The "Calibration_Alba.py" file is used to establish the relationship between the camera and the arm, so that there are always accurate displacement coordinates from the camera's frame of reference to the UR3 robot's. For the robot's task to be carried out successfully the strawberries must be present; the absence of strawberries to harvest would cause the task to be unsuccessful.
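
A hedged sketch of the resulting camera-to-robot transform follows; the rotation and translation values are placeholders standing in for whatever "Calibration_Alba.py" actually measures, and in the full system a camera-frame point would pass through this mapping before being handed to the PMP.

import numpy as np

R = np.eye(3)                        # assumed: camera axes aligned with the base
t = np.array([0.25, -0.10, 0.40])    # assumed camera position in the robot frame (m)

def camera_to_robot(p_cam):
    """Map a point from the camera frame into the UR3 base frame."""
    return R @ p_cam + t

print(camera_to_robot(np.array([0.0, 0.1, 0.9])))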

 

5. Testing/Analysis – Evaluation

5.1. Control System

5.1.1. Artificial Neural Network

The task should be independent of the robot performing it, a characteristic achieved through the use of the ANN during project development: by training the ANN with values from a different robot, the same system still works. The use of forward kinematics in place of inverse kinematics is innovative; the approach proved effective and time-saving, and the robot performed as anticipated. Restricting the training data made the ANN perform better.

5.1.2. Passive Motion Paradigm

If the system had two arms, timing would be an important aspect to consider; however, since the project developed a single arm, timing acts only as a goal to fulfil. Using two arms would increase productivity, and control would then be goal-oriented rather than deadline-driven. The challenge in developing a robot with two arms is how to make them cooperate effectively, increasing picking efficiency while avoiding collisions with each other. The system could be improved by using a robot with two arms.

5.2. Vision System

The vision sensor is vital to achieving the goal of the project. The ZED camera provides the robot's vision; however, that vision can be affected by a number of factors, including light intensity, the age of the sensor, and incorrect assumptions.

5.2.1. Target recognition

Target recognition faced some troubles, mainly caused by environmental factors, the most common being related to lighting. Since recognition is based on color, it depends on a high-resolution camera and proper lighting. When the lighting was poor and the strawberries were in shade, they would take on another color, which would throw off the target recognition.

  • Depth obtained

The depth was obtained using the ZED camera and was always accurate for the pixels at which it pointed.

  • Integration

The testing was conducted using real strawberry plants.

 

  5. Project Design, Specifications

The project comprises a pick-and-place system able to recognize objects, combining the UR3 robot with a vision system based on the ZED camera. As mentioned before, the project is a solution to an agricultural problem. For the robotic arm, motor control is worked out from kinematic calculations that take the joint variables as input and produce the end-point solution as output. For the algorithms, an artificial neural network, a feedforward multilayer perceptron, is used mainly to obtain the movements for the arm, trained on the kinematic data obtained. The robot is task-specific, designed for strawberry harvesting, and has to operate at low cost and high speed.
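As a hedged illustration of this design, the sketch below combines UR3 forward kinematics, built from Universal Robots' published Denavit-Hartenberg table, with a feedforward multilayer perceptron trained on the resulting kinematic data. The project trained its network in MATLAB; scikit-learn is used here only for illustration, and the network size and sample count are assumptions.

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    # UR3 standard D-H parameters (metres / radians).
    D     = [0.1519, 0.0, 0.0, 0.11235, 0.08535, 0.0819]
    A     = [0.0, -0.24365, -0.21325, 0.0, 0.0, 0.0]
    ALPHA = [np.pi/2, 0.0, 0.0, np.pi/2, -np.pi/2, 0.0]

    def dh(theta, d, a, alpha):
        # Homogeneous transform of one D-H link.
        ct, st = np.cos(theta), np.sin(theta)
        ca, sa = np.cos(alpha), np.sin(alpha)
        return np.array([[ct, -st*ca,  st*sa, a*ct],
                         [st,  ct*ca, -ct*sa, a*st],
                         [0.0,    sa,     ca,    d],
                         [0.0,   0.0,    0.0,  1.0]])

    def forward_kinematics(q):
        # End-effector position for six joint angles q (radians).
        T = np.eye(4)
        for i in range(6):
            T = T @ dh(q[i], D[i], A[i], ALPHA[i])
        return T[:3, 3]

    # Sample random joint configurations and learn the joints-to-position map.
    rng = np.random.default_rng(0)
    Q = rng.uniform(-np.pi, np.pi, size=(5000, 6))
    X = np.array([forward_kinematics(q) for q in Q])
    net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500).fit(Q, X)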

Ripe strawberries are easily damaged and bruised, so the robot must be designed to minimize damage and bruising; this makes strawberry harvesting very challenging and requires the strawberries to be handled gently during the manipulation procedure. Non-contact harvesting might be the approach to take to avoid damage.

Another challenge faced during harvesting is that strawberries are small and tend to grow in clusters, which makes it difficult to identify and pick individual fruits. Picking strawberries in clusters with dense obstacles is a major challenge in strawberry harvesting.

Robots of different sizes and shapes may be designed for different environments even though the function, harvesting strawberries, is the same. For example, a small robot designed for harvesting on small farms and in greenhouses may be sized to fit within tractor tracks.

The method first considered for the arm was point-to-point path planning, which moves the arm from a starting point to a point below the target. With this method, the gripper can easily swallow the lower-hanging and surrounding berries, leaves, and stems along with its target berry, so it was considered inappropriate, since it led to considerable loss. To solve this, a bottom-up approach is used to harvest the berries instead: the arm starts by picking the berries at the bottom and works its way up, which minimizes losses during harvesting. The default picking sequence is from left to right, as in the ordering sketch below.
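The ordering itself reduces to a simple sort on the detected bounding boxes, sketched below under the assumption that boxes are (x, y, w, h) in image coordinates, where a larger y means lower in the image and hence lower on the plant.

    def picking_order(boxes):
        # Bottom-up first (largest bottom edge y + h comes first),
        # then left to right by x within the same height.
        return sorted(boxes, key=lambda b: (-(b[1] + b[3]), b[0]))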

To avoid collisions with obstacles in the environment, a number of methods can be used. One is to develop a search algorithm that explores the search space for a path, with each candidate checked by a collision detector. Another is to use a randomized path planner that generates a random path tree, testing each path with a local path planner to find a collision-free one. In this project the arm incorporates a collision detector to avoid obstacles in the environment. Obstacles cannot always be avoided, however, especially when picking small, clustered fruit, since the obstacles may be very close to the targets.
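A minimal sketch of such a local check is given below: it samples a straight-line path and rejects it if any sample falls inside an obstacle, with obstacles modelled as spheres. The geometry, radius, and sampling density are illustrative assumptions, not the project's detector.

    import numpy as np

    def path_is_collision_free(start, goal, obstacles, radius=0.03, steps=50):
        # obstacles: list of 3-D centre points, each treated as a sphere.
        for s in np.linspace(0.0, 1.0, steps):
            p = (1 - s) * np.asarray(start) + s * np.asarray(goal)
            for c in obstacles:
                if np.linalg.norm(p - np.asarray(c)) < radius:
                    return False
        return True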

The algorithm used is colour-based; it takes advantage of colour differences and has a fast processing speed. RGB images are transformed into Hue-Saturation-Value (HSV) space, which is used for the image processing. The aim of the machine-vision subsystem is to detect and locate ripe strawberries correctly; the bounding boxes of the detected strawberries are passed to the other subsystems.
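A minimal OpenCV sketch of such a colour-based detector follows; the HSV threshold values and the minimum blob area are assumptions to be tuned, and the two-value findContours return assumes OpenCV 4.

    import cv2

    def detect_ripe_strawberries(bgr):
        # Return bounding boxes (x, y, w, h) of red blobs in a BGR image.
        hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
        # Red wraps around hue 0 on OpenCV's 0-179 hue scale, so two
        # bands are thresholded and combined.
        lower = cv2.inRange(hsv, (0, 100, 80), (10, 255, 255))
        upper = cv2.inRange(hsv, (170, 100, 80), (179, 255, 255))
        mask = cv2.bitwise_or(lower, upper)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        # Discard tiny blobs that are unlikely to be whole berries.
        return [cv2.boundingRect(c) for c in contours
                if cv2.contourArea(c) > 200]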

When the system is integrated, the arm is able to harvest continuously along the strawberry rows. The robot's ability to carry out a picking operation, stop, and then move on once picking is complete is termed static harvesting.

 

  5.1. Testing and Evaluation
  • Testing is difficult in an unstructured environment, that is, an environment containing features uncertain or unknown to the robot. Such environments include farms of different sizes, crop variations, differences between farms, and varying crop features, among many others; farms may differ even when they grow the same crop. The robot should be programmed to adapt and work efficiently in both defined and undefined environments.
  • When evaluating the performance of the UR3 robot, it is important to determine how long the robot takes to harvest a single fruit. The evaluation should also consider the robot's ability to pick a ripe strawberry: the correct peduncle must be identified before any picking can be done, so it is also important to determine whether the robot recognizes the correct peduncle before picking the target strawberries.
  • Manipulation in uncertain environments is a major challenge in making the harvesting robot a reality. In both manipulation and detection, cluster picking is difficult because the surrounding fruits, leaves, stems, and other obstacles are hard to separate from the target.
  • The robot should be tested multiple times during and after development to measure its success when picking isolated and clustered strawberries. The picking rates should be evaluated separately to determine the harvester's efficiency, and a pick should be counted as successful only when no damage is done to the berries; the sketch after this list illustrates the metrics.
  • To evaluate the arm's performance, repeated tests are conducted on the arm, with each axis tested independently.
  • The project concentrates on developing a functional strawberry-picking arm. However, for the arm to be fully functional and deployed on actual farms, it should also incorporate a navigation system, which would make it possible to move from one row of the farm to another while picking ripe strawberries. A container for the picked berries could also be integrated to reduce picking time.
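The picking metrics mentioned in the list above reduce to simple ratios; the sketch below shows the arithmetic with hypothetical counts, not measured results from this project.

    # All counts are placeholders, not data from this project.
    picks_attempted  = 50
    picks_successful = 41     # berry detached and placed
    picks_undamaged  = 38     # successful picks with no bruising
    total_time_s     = 420.0  # wall-clock time for the whole run

    success_rate     = picks_successful / picks_attempted   # 0.82
    damage_free_rate = picks_undamaged / picks_attempted    # 0.76
    time_per_fruit_s = total_time_s / picks_successful      # about 10.2 s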
    • System control
    • Artificial Neural Network

As mentioned earlier, one important characteristic worth noting is that the robot was required to be independent of the task it performed: nothing would need to change regardless of the nature of the soft fruit. For this reason an artificial neural network was preferred, because it can be trained with different values for various robots and the system will still work. The application of forward kinematics, replacing the inverse kinematics that predominates in pick-and-place tasks, is innovative. This approach proved very effective and time-saving, since the robot always reached within the expected location. Regarding the ANN itself, given that the data set was restricted, the network performed far better inside the set restrictions.

  • Passive Motion Paradigm

In terms of timing, if the system had two hands instead of one, timing would be of crucial importance to synchronize their movements. Since there is only one arm, timing stands only as an attractor in time, a goal to fulfil, a deadline. With two arms, one would reach for the strawberry while the second moved foliage out of the path: goal-oriented control instead of a deadline.
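As an illustration of treating the goal as an attractor, the sketch below shows one PMP-style update step: a virtual spring pulls the end effector toward the goal, and the force is mapped to joint motion through the Jacobian transpose, so no inverse kinematics is solved explicitly. The gains and the jacobian callable are assumptions, not the project's implementation.

    import numpy as np

    def pmp_step(q, x, x_goal, jacobian, K=2.0, A=0.1, dt=0.01):
        # Virtual force field at the end effector, attracted to the goal.
        F = K * (np.asarray(x_goal) - np.asarray(x))
        # Joint "admittance" response through the Jacobian transpose.
        dq = A * jacobian(q).T @ F
        return q + dq * dt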

  • UR3

The method through which the UR3 received the information and carried out the movement it had to perform was accurate. However, it could be improved in several ways.

  • Vision System

Robotic vision nowadays depends greatly on machine learning from real-world datasets, with approaches such as deep neural networks. In this case, no neural-network data sets were used for vision, which helps keep the system's timing close to that of a real-time operating system.

  • Target recognition

Target recognition had some trouble, mainly regarding environmental factors, the most relevant being lighting. Because target recognition is based on colour, it is completely dependent on the camera having good resolution and good lighting. As soon as the strawberries were shaded, they would look brown to black, which would throw off the target recognition.

  • Depth obtained

The depth obtained through the ZED camera was always accurate for the pixels at which it was pointed. The depth map proved to be a very useful tool.

  • Integration

The testing of the integrated system was carried out using real strawberry plants. The strawberry plants were acquired

6. Conclusions

This research knocks on the door of implementing an industrial robot in an unstructured environment, tackling a new challenge for technology and automation. It should be useful as a starting point for cultivating research on robotics for unstructured environments and human-robot collaboration, and for an autonomous fruit-picking system for the field. It is an innovative approach to a traditional pick-and-place problem: motion control for pick-and-place is normally programmed directly, describing the exact motions, steps, or points the robot must reach, whereas here the problem was tackled by using an ANN and PMP control to obtain the robot's motion in a changing, unstructured environment.

  • Recommendations/Improvements

An important suggestion for the ANN training of the UR3 control system would be to try different neural networks, with the aim of finding one more suited to the task, capable of learning and adapting better to the spherically constrained workspace of the points and to the limited area of that sphere. Another important recommendation would be to implement a different, more accurate kind of visual target recognition; some form of deep learning could be used for a more robust, less lighting-dependent recognition. With a wider time frame, complete ROS integration would also be recommended: creating a ROS publisher and subscriber for the PMP program and its information flow, and helping to develop the UR3 ROS framework, which needs work. It should be possible to integrate the whole system in ROS at some point.

  • Limitations

A fully functioning robot requires time to develop, and development in this case was restricted to a strict timespan. Further human functionalities, such as reaction, were not achieved, so the model is still considered to take a supervised approach.

  • Further Works
  • Collaborations:

Collaborative work undertaken at the end of this research consisted of using the ZED camera vision system developed in this dissertation together with a control system for the UR3 industrial robot developed by a colleague (E. Hernandez). This collaborative work was used for demonstration purposes to the media, and it also stands as a test of how easily the systems integrate.

  • Other Applications

One important thing to realise about this research is that its applications extend to many other fields, the automobile industry being a prime example; robotic manipulators are the new big thing in IoT industries. The concepts used here can therefore be applied in other industrial fields to provide optimal solutions. The theories, technologies, and framework can further be used to improve both the model presented here and others in different industries, and to support further research into the use of different algorithms.

 

 

 

 
