Collaborative robotics: a new paradigm?

Collaborative robotics has now been a reality for about ten years and finds application in a variety of settings, from machine tool loading to quality control and end-of-line packaging.
Ease of installation and a rapid return on investment have been the main growth drivers of this technology, which is carving out an increasingly important role in the wider robotics sector.
According to the Statistics Department of the International Federation of Robotics (IFR), collaborative robot installations represent a constantly growing market share, from 2.8% in 2017 to 4.8% in 2019. This growth occurs mainly in new markets and for new applications, only marginally taking share away from more traditional robotics.

The very first installations of collaborative robotic systems were designed to completely relieve humans of demanding and dangerous tasks. From simple object manipulation to the most recent uses in welding, the most exploited capabilities have been ease of use, programming, and reprogramming. The collaborative nature is perceived as most valuable during installation and while teaching new tasks.

This substitution paradigm will be short-lived. In fact, according to research by McKinsey & Company, fewer than 5% of tasks are fully automatable, at least with current technology, while more than 40% are automatable for at least half of their content. It follows that we should expect an increasing number of truly collaborative applications, with robots working side by side with human workers.

Is it just a matter of time, then, between the first installations and widespread diffusion? Or is it a matter of technology? Probably both. Indeed, it is not surprising that an analysis carried out by Boston Consulting Group found that more than 90% of the companies interviewed are not yet able to take full advantage of next-generation robotics.

However, one thing is certain: looking to the future means preparing to face the most complicated challenges, namely those applications (definitely the majority, as we have seen) in which humans cannot be completely replaced by automation, and in which the two natures, the human and the artificial, must coexist and be functional to each other.
This is probably where research and innovation efforts should be concentrated. Robots are of course equipped with sophisticated techniques to ensure the safety of operators, and applications are always certified according to standards. But safety, albeit necessary, is not the only enabling factor for collaboration between humans and robots.

In terms of collaboration, strictly speaking, something more is expected, going beyond mere, and sometimes occasional, coexistence. Collaborating, from the late Latin collabōrare, meaning working together, is a relatively simple and natural activity for two people. Perhaps it is just as simple between two robots. Between humans and robots, however, the levels of effectiveness and reliability are still not satisfactory. How come? Combining two components that are so different from each other, and that speak different languages, is certainly not a walk in the park.

People do not communicate with each other only in natural language; they do so in many other, not necessarily verbal, ways. Gestures, body language, and expressiveness are all methods of communication that a person can easily interpret, but that a machine finds difficult to understand.

First of all, we are talking about robots with very limited sensory abilities. They are rarely equipped with vision systems, and almost never use them to observe their “human colleague”.
Observing and understanding a scene, for example the workspace, and relating its elements spatially and temporally are activities that we perform naturally, without even realizing it. But how could a robot do the same?

As far as the “sense organs” are concerned, the answer comes easily. Artificial vision technologies are now well refined: a sensor of a few square centimetres can deliver several megabytes of data (just think that a smartphone camera has at least 12 megapixels). Indeed, image resolution is not the actual problem, nor is the level of detail. To date, the weak point is the ability to distinguish and relate the different elements of a scene.
Cognitive vision, that is, the set of techniques ranging from image analysis (computer vision) to machine learning, will probably be the keystone for the further development of collaborative robotics. Sensors and cameras, together with sophisticated algorithms, will allow robots to understand the context in which they operate, and in which their “human colleagues” also operate. They will be able to share their workspace with humans, ready to take over when needed, supporting humans instead of completely replacing them.
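
To give a concrete flavour of the first step, the minimal sketch below uses an off-the-shelf pretrained detector from torchvision to list the elements of a workspace image. The model choice and the image path are illustrative assumptions, not the pipeline used in our lab.

```python
# Minimal sketch: detecting workspace elements with an off-the-shelf model.
# Assumes torch and torchvision are installed; this is an illustration of
# the "cognitive vision" building block, not the lab's actual pipeline.
import torch
from torchvision.io import read_image
from torchvision.models.detection import (
    fasterrcnn_resnet50_fpn_v2,
    FasterRCNN_ResNet50_FPN_V2_Weights,
)

weights = FasterRCNN_ResNet50_FPN_V2_Weights.DEFAULT
model = fasterrcnn_resnet50_fpn_v2(weights=weights).eval()

img = read_image("workspace.jpg")          # hypothetical image of the cell
batch = [weights.transforms()(img)]

with torch.no_grad():
    result = model(batch)[0]

categories = weights.meta["categories"]
for label, score, box in zip(result["labels"], result["scores"], result["boxes"]):
    if score > 0.8:                        # keep confident detections only
        print(f"{categories[label]} ({score:.2f}) at {box.tolist()}")
```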

In our lab at Politecnico di Milano, we are seriously tackling this problem. We aim to combine computer vision, object detection, and artificial reasoning to facilitate collaboration between humans and robots in manufacturing assembly tasks.

The video shows a human-robot collaborative assembly application for automotive components, namely the rear braking system of a motorbike. The worker is responsible for assembling the oil tank and securing it to the pump, while a collaborative robot, the ABB YuMi, performs the pre-assembly of the oil tube connection from the caliper to the pump. The operator is finally responsible for tightening three screws, two with a pneumatic ratchet and one with an electric screwdriver.

Object detection and human tracking are combined to understand the ongoing activity of the human operator and to synchronise the robot accordingly.
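
A toy sketch of this combination is shown below: the ongoing activity is inferred from the proximity between the tracked hand and the detected objects. The labels, coordinates, threshold, and activity map are hypothetical placeholders, not the model of reference 4.

```python
# Toy sketch: inferring the operator's current activity from the distance
# between the tracked hand and detected objects. Labels, thresholds, and
# the activity map are hypothetical illustrations.
import math

ACTIVITY_BY_OBJECT = {          # hypothetical mapping
    "oil_tank": "assembling tank",
    "pneumatic_ratchet": "tightening screws",
    "screwdriver": "tightening screws",
}

def infer_activity(hand_xy, detections, max_dist=0.10):
    """Return the activity associated with the object closest to the hand,
    if any object lies within max_dist metres; otherwise 'idle'."""
    best_label, best_dist = None, max_dist
    for label, (x, y) in detections:
        d = math.dist(hand_xy, (x, y))
        if d < best_dist:
            best_label, best_dist = label, d
    return ACTIVITY_BY_OBJECT.get(best_label, "idle")

# Example frame: the hand is near the ratchet, so the robot can
# anticipate that the tightening phase has started.
print(infer_activity((0.42, 0.31),
                     [("oil_tank", (0.80, 0.55)),
                      ("pneumatic_ratchet", (0.45, 0.33))]))
```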

References:

  1. Fraunhofer Institute for Industrial Engineering (IAO), “Lightweight robots in manual assembly – best to start simply!”, 2016
  2. McKinsey Global Institute, “A future that works: automation, employment, and productivity”, 2017
  3. Boston Consulting Group, “Advanced Robotics in the Factory of the Future”, 2019
  4. N. Lucci, A. Monguzzi, A.M. Zanchettin, P. Rocco – “Human activity modelling and recognition for collaborative robotics based on hand-object interaction”, submitted.

When robots can decide what to do

The word “cobot” denotes a robot optimised for collaboration with humans. Traditional industrial robotics guarantees high efficiency and repeatability for mass production, but lacks the flexibility to deal with fast changes in consumer demand. Humans, on the other hand, can face such uncertainties and variability, but are limited by their physical capabilities in terms of repeatability, strength, endurance, speed, etc. Human-robot collaboration is a productive balance that captures the benefits of both industrial automation and human work.

In traditional automation, decisions are frequently driven by PLC logic. In discrete manufacturing, the problem arises of how to choose when two or more options are simultaneously available.
Precedence rules are normally adopted. Sometimes the definition of these rules is based on a priori knowledge of the system; most often, they rely on the intuition of the programmer, who implements simple tie-breaking rules with no clear foundation in terms of optimality, as in the sketch below.
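
For example, a hand-coded precedence rule might look like the following; the job names and priorities are hypothetical.

```python
# Sketch of a typical hand-coded tie-breaking rule in PLC-style logic:
# when several jobs are ready, pick the one with the highest static priority.
# The priorities below are hypothetical and encode no notion of optimality.
PRIORITY = {"load_machine": 3, "quality_check": 2, "packaging": 1}

def next_job(ready_jobs):
    """Return the ready job with the highest a-priori priority."""
    return max(ready_jobs, key=lambda job: PRIORITY.get(job, 0))

print(next_job(["packaging", "quality_check"]))  # -> quality_check
```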

A static job schedule determines a priori which operations are to be executed by the human and which by the robot. This methodology can be useful when changes in the workplace are not observable, when agent performance is not measurable, or when the system is observable and measurable but the agents can no longer be controlled once the task has begun.

But what about allowing robots to learn optimal decisions from experience? Moreover, what if the robot could learn by running what-if analyses within a digitalised environment (i.e. a digital twin)?
By collecting production data from the physical system, the digital twin can progressively tune its parameters so as to fit the actual behaviour of the system.
Based on these parameters, simulations or what-if analyses can be run to predict the effects of decisions and to select the decision that will actually provide the best outcome in terms of performance or productivity. The key idea is sketched in the following.

Reinforcement learning is the process adopted to learn what to do (the policy) based on experience. While typical demonstrations of such methods rely on an initial trial-and-error phase on the real system, learning on a digital replica of the system has obvious advantages, including a faster learning rate.
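
The following is a minimal sketch of this idea: tabular Q-learning run against a stub digital twin. States, actions, rewards, and the twin’s dynamics are hypothetical placeholders; the real twin is calibrated on production data, and the approach of reference 1 is considerably richer.

```python
# Minimal tabular Q-learning sketch against a digital-twin stub.
# States, actions, rewards, and the twin's dynamics are hypothetical
# placeholders; a real twin would be fitted to production data.
import random
from collections import defaultdict

ACTIONS = ["assign_to_robot", "assign_to_human"]

def twin_step(state, action):
    """Stub digital twin: returns (next_state, reward). A real twin would
    simulate cycle times calibrated on the physical system."""
    cycle_time = {"assign_to_robot": 8.0, "assign_to_human": 5.0}[action]
    cycle_time += random.gauss(0, 1)           # process variability
    next_state = (state + 1) % 6               # six identical products
    return next_state, -cycle_time             # shorter cycles = more reward

Q = defaultdict(float)
alpha, gamma, eps = 0.1, 0.9, 0.2

for episode in range(2000):
    state = 0
    for _ in range(6):                         # one decision per product
        if random.random() < eps:              # explore
            action = random.choice(ACTIONS)
        else:                                  # exploit current estimate
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        nxt, reward = twin_step(state, action)
        best_next = max(Q[(nxt, a)] for a in ACTIONS)
        Q[(state, action)] += alpha * (reward + gamma * best_next
                                       - Q[(state, action)])
        state = nxt

# The learned policy can then be deployed on the physical cell.
print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(6)})
```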

In our lab we ran a series of verification tests in a collaborative human-robot assembly scenario. The product to be assembled is the Domyos Shaker 500 ml produced by Decathlon. The robot and the operator collaborate in assembling six identical products. Job allocation and job sequencing are dynamically optimised.

Below, a video of the application.

References:

  1. G. Fioravanti, D. Sartori, “Collaborative robot scheduling based on reinforcement learning in industrial assembly tasks”, MSc Thesis at Politecnico di Milano, 2020
  2. R. S. Sutton, A. G. Barto, “Reinforcement Learning: An Introduction”. MIT Press, 1998

Robotics gives humans some relief

One of the benefits arising from the adoption of collaborative robots is the possibility to share or carry the load during transportation or manipulation. This type of collaborative operation can clearly reduce the muscular effort of the human and possibly improve the quality of the working environment.
Musculoskeletal disorders (MSDs), in fact, represent one of the major work-related health problems in developed countries, affecting almost 50% of industrial workers. As MSDs are mainly due to strenuous biomechanical loads, caused e.g. by payload transportation or bad postures, it is widely agreed that collaborative robots can help preserve employees’ health by taking on physically demanding tasks that are too complex to be fully automated.

Several factors influence the ergonomic assessment of a given pose. The most crucial are certainly the body posture, which might lead to severe static joint loads; the payload, or the external force exerted on the musculoskeletal system; and the rapidity of the motion, e.g. the presence of large accelerations. Unlike other works in which the robot simply acts as a passive positioner, here a proper motion of the robot is proposed in order to mitigate these effects.

The key idea is then to move the robot proportionally to the displacement of the human with respect to the ergonomic reference posture. By doing so, we expect the human to compensate for this movement by moving towards a more ergonomic posture.
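
In its simplest form, the idea can be sketched as a proportional law; the gain and the posture encoding below are hypothetical illustrations, not the controller of reference 1.

```python
# Sketch of the proportional assistance idea: the robot offsets the handled
# object proportionally to the deviation of the measured posture from an
# ergonomic reference, inducing the human to move back towards it.
# Gain and posture vectors are hypothetical.
import numpy as np

K = 0.5  # proportional gain (hypothetical)

def robot_offset(measured_posture, reference_posture):
    """Displacement commanded to the robot, proportional to the human's
    deviation from the ergonomic reference posture."""
    return K * (np.asarray(measured_posture) - np.asarray(reference_posture))

# Example: the operator's wrist has drifted 0.12 m above the reference height.
print(robot_offset([0.0, 0.0, 1.32], [0.0, 0.0, 1.20]))  # -> [0. 0. 0.06]
```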

At Politecnico di Milano, we developed and applied this concept to the spray painting of a car bumper. Unlike in car production, in an aftermarket scenario this task is hard to fully automate, due to the high variability of the parts being manipulated.

The relationship between the average displacement from the ergonomic posture and the amount of assistance given by the robot is an important KPI (key performance indicator). A correlation analysis showed that these two quantities are negatively correlated.
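
Such an analysis takes only a few lines; the data below are made-up placeholders, shown only to illustrate the computation, not our experimental measurements.

```python
# Sketch of the correlation analysis on made-up placeholder data: more robot
# assistance should correspond to a smaller displacement from the reference.
import numpy as np

assistance   = np.array([0.0, 0.2, 0.4, 0.6, 0.8, 1.0])        # normalised
displacement = np.array([0.15, 0.13, 0.10, 0.08, 0.06, 0.05])  # metres

r = np.corrcoef(assistance, displacement)[0, 1]
print(f"Pearson correlation: {r:.2f}")  # negative, consistent with the finding
```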

Another important indicator, more related to the usability of the application, is the accuracy of the operator in performing the given task while the robot is moving. Quite surprisingly, the average accuracy is not significantly affected by the motion of the robot. Finally, besides the postural and accuracy analyses, an important performance indicator is the time required by the collaborative system to perform the prescribed task, which was reduced by around 25%.

To summarise the outcomes of the experimental campaign, we can clearly state that the intelligent object handling strategy introduced in this work is responsible for:

  • smaller motions of the tool handled by the operator;
  • approximately the same level of accuracy in executing the task;
  • shorter cycle times for the collaborative painting operation,

which clearly result in better productivity, lower exposure to musculoskeletal disorders and, in turn, no substantial change in the quality of the produced goods.

References:

  1. A.M. Zanchettin, E. Lotano, P. Rocco – “Collaborative Robot Assistant for the Ergonomic Manipulation of Cumbersome Objects”, IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2019, Macau, November 4th-8th, 2019.
  2. OSH in figures: Work-related musculoskeletal disorders in the EU – Facts and figures, http://osha.europa.eu/.
  3. P. Maurice, V. Padois, Y. Measson, and P. Bidaud, “A digital human tool for guiding the ergonomic design of collaborative robots,” in 4th International Digital Human Modeling Symposium (DHM 2016), 2016.

Industry 4.0: towards an intelligent collaboration with robots

How many times, at the grocery shop, while waiting for our turn to be served, have we asked ourselves: should I just queue and wait, or should I swing by another aisle in the meantime? A simple question that nevertheless entails quite a bit of reasoning: how fast are the attendants in serving other people? How much time do I have without losing my turn?

Now, let’s move this paradigm to the factory of the future. Our guest star is a collaborative robot that has to decide when its human co-worker will require assistance.

This is what we are currently doing in our lab at Politecnico di Milano to allow the robot to answer the following questions:

  • which activity is the human operator most likely to perform next?
  • when is an activity requiring my assistance expected to be initiated by the human?

But let’s proceed step by step…

The first ingredient we need is an efficient way to categorise human actions. For this goal, we used some Bayesian statistics and some computer vision. The result is shown in this video: the robot completes the operation initiated by the human.
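
A toy version of the Bayesian ingredient is sketched below: a belief over candidate actions is updated as visual observations arrive. The actions, priors, and likelihoods are hypothetical placeholders, not our published model.

```python
# Toy sketch of Bayesian action categorisation: update a posterior over
# candidate actions as visual observations arrive. Actions, priors, and
# likelihoods are hypothetical placeholders.
ACTIONS = ["reach_tank", "reach_ratchet", "idle"]
prior = {a: 1 / len(ACTIONS) for a in ACTIONS}

# P(observation | action): probability of seeing the hand move towards
# a given workspace region, for each candidate action.
LIKELIHOOD = {
    "hand_towards_tank":    {"reach_tank": 0.8, "reach_ratchet": 0.1, "idle": 0.1},
    "hand_towards_ratchet": {"reach_tank": 0.1, "reach_ratchet": 0.8, "idle": 0.1},
}

def update(posterior, observation):
    """One Bayes step: posterior proportional to likelihood x prior."""
    unnorm = {a: LIKELIHOOD[observation][a] * p for a, p in posterior.items()}
    z = sum(unnorm.values())
    return {a: v / z for a, v in unnorm.items()}

belief = prior
for obs in ["hand_towards_tank", "hand_towards_tank"]:
    belief = update(belief, obs)
print(max(belief, key=belief.get), belief)   # -> "reach_tank" dominates
```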

Now that we have correctly characterised what the human is doing, what’s next? Based on machine learning algorithms, and specifically on pattern recognition, we were also able to predict the next sequence of actions and their durations. The result is shown in the next video: the robot is autonomously responsible for a quality control task, while the operator is involved in some assembly operations. As soon as the human completes his task, the collaborative phase can start and the robot is ready to help. This promptness is achieved thanks to the observation of previous executions, which allows the robot to be ready when required by the human.
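
The sketch below conveys the idea with the simplest possible pattern model: transition frequencies estimated from logged executions, used to predict the most likely next action and its average duration. The logged sequences are hypothetical placeholders, and the actual method of reference 2 is far more sophisticated.

```python
# Sketch of next-action prediction from observed executions: estimate
# transition frequencies between actions and predict the most likely
# successor together with its average duration. The logs are hypothetical.
from collections import Counter, defaultdict

# (action, duration in seconds) sequences from hypothetical past executions
logs = [
    [("pick_part", 4.0), ("assemble", 12.5), ("request_help", 2.0)],
    [("pick_part", 3.6), ("assemble", 11.8), ("request_help", 2.3)],
    [("pick_part", 4.2), ("inspect", 6.0), ("assemble", 12.1)],
]

transitions = defaultdict(Counter)
durations = defaultdict(list)
for seq in logs:
    for (a, _), (b, d) in zip(seq, seq[1:]):
        transitions[a][b] += 1
        durations[b].append(d)

def predict_next(current_action):
    """Most frequent successor of current_action and its mean duration."""
    nxt, _ = transitions[current_action].most_common(1)[0]
    mean_d = sum(durations[nxt]) / len(durations[nxt])
    return nxt, mean_d

print(predict_next("assemble"))  # -> ("request_help", 2.15)
```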

The algorithm has been compared to a purely reactive approach, in which the robot always starts its own task unless the human has already initiated the collaborative phase. The proactive behaviour outperforms the purely reactive one by reducing both the cycle time and its variability, hence contributing to production levelling (or heijunka, 平準化, in Japanese).


References:

  1. A.M. Zanchettin, P. Rocco – “Probabilistic inference of human arm reaching target for effective human-robot collaboration”, IROS 2017, Vancouver (Canada), September 24th – 28th, 2017.
  2. A.M. Zanchettin, A. Casalino, L. Piroddi, P. Rocco – “Prediction of human activity patterns for human-robot collaborative assembly tasks”, IEEE Transactions on Industrial Informatics.