{"id":1680,"date":"2020-07-29T11:53:32","date_gmt":"2020-07-29T14:53:32","guid":{"rendered":"http:\/\/web.inf.ufpr.br\/vri\/?page_id=1680"},"modified":"2022-04-11T20:56:36","modified_gmt":"2022-04-11T23:56:36","slug":"amr-unconstrained-scenarios","status":"publish","type":"page","link":"https:\/\/web.inf.ufpr.br\/vri\/publications\/amr-unconstrained-scenarios\/","title":{"rendered":"Towards Image-based Automatic Meter Reading in Unconstrained Scenarios: A Robust and Efficient Approach"},"content":{"rendered":"<h3><img fetchpriority=\"high\" decoding=\"async\" class=\"aligncenter wp-image-1872 size-full\" src=\"http:\/\/web.inf.ufpr.br\/vri\/wp-content\/uploads\/sites\/7\/2021\/03\/pipeline.png\" alt=\"\" width=\"1464\" height=\"316\" srcset=\"https:\/\/web.inf.ufpr.br\/vri\/wp-content\/uploads\/sites\/7\/2021\/03\/pipeline.png 1464w, https:\/\/web.inf.ufpr.br\/vri\/wp-content\/uploads\/sites\/7\/2021\/03\/pipeline-300x65.png 300w, https:\/\/web.inf.ufpr.br\/vri\/wp-content\/uploads\/sites\/7\/2021\/03\/pipeline-1024x221.png 1024w, https:\/\/web.inf.ufpr.br\/vri\/wp-content\/uploads\/sites\/7\/2021\/03\/pipeline-768x166.png 768w, https:\/\/web.inf.ufpr.br\/vri\/wp-content\/uploads\/sites\/7\/2021\/03\/pipeline-360x78.png 360w\" sizes=\"(max-width: 1464px) 100vw, 1464px\" \/><\/h3>\n<h2 id=\"paper-information\" class=\"link_hover_visible\"><strong>1. Paper Information<\/strong><\/h2>\n<h3 id=\"authors\" class=\"link_hover_visible\"><b>1.1. Authors<\/b><\/h3>\n<p><i>Rayson Laroca, Alessandra B. Araujo, Luiz A. Zanlorensi, Eduardo C. de Almeida, David Menotti.<\/i><\/p>\n<h3 id=\"abstract\" class=\"link_hover_visible\"><b>1.2. Abstract\u00a0<\/b><\/h3>\n<p><em>Existing approaches for image-based Automatic Meter Reading (AMR) have been evaluated on images captured in well-controlled scenarios. 
However, real-world meter reading presents unconstrained scenarios that are way more challenging due to dirt, various lighting conditions, scale variations, in-plane and out-of-plane rotations, among other factors. In this work, we present an end-to-end approach for AMR focusing on unconstrained scenarios. Our main contribution is the insertion of a new stage in the AMR pipeline, called corner detection and counter classification, which enables the counter region to be rectified \u2013 as well as the rejection of illegible\/faulty meters \u2013 prior to the recognition stage. We also introduce a publicly available dataset, called Copel-AMR, that contains 12,500 meter images acquired in the field by the service company\u2019s employees themselves, including 2,500 images of faulty meters or cases where the reading is illegible due to occlusions. Experimental evaluation demonstrates that the proposed system outperforms six baselines in terms of recognition rate while still being quite efficient. Moreover, as very few reading errors are tolerated in real-world applications, we show that our AMR system achieves impressive recognition rates (i.e., \u2265 99%) when rejecting readings made with lower confidence values.<\/em><\/p>\n<h3 id=\"citation\"><b>1.3. Citation<\/b><\/h3>\n<p class=\"indent\">If you use the models trained by us or the Copel-AMR dataset in your research, please cite our paper:<\/p>\n<ul>\n<li class=\"indent\">R. Laroca, A. B. Araujo, L. A. Zanlorensi, E. C. de Almeida, D. Menotti, &#8220;Towards Image-based Automatic Meter Reading in Unconstrained Scenarios: A Robust and Efficient Approach,&#8221; IEEE Access, vol. 9, pp. 67569-67584, 2021. 
[<a title=\"IEEE Xplore\" href=\"https:\/\/doi.org\/10.1109\/ACCESS.2021.3077415\" target=\"_blank\" rel=\"noopener\"><strong>IEEE Xplore<\/strong><\/a>] [<strong><a href=\"https:\/\/ieeexplore.ieee.org\/stamp\/stamp.jsp?tp=&amp;arnumber=9422699\" target=\"_blank\" rel=\"noopener\">PDF<\/a><\/strong>] [<a href=\"https:\/\/raysonlaroca.github.io\/bibtex\/laroca2021towards.txt\" target=\"_blank\" rel=\"noreferrer noopener\" aria-label=\" (opens in a new tab)\"><strong>BibTeX<\/strong><\/a>]<\/li>\n<\/ul>\n<h2 id=\"downloads\" class=\"link_hover_visible\"><b>2. Downloads<\/b><\/h2>\n<h3 id=\"proposed-system\" class=\"link_hover_visible\"><strong>2.1. Proposed AMR System<\/strong><\/h3>\n<p class=\"indent\">The proposed approach consists of three main stages: (i) counter detection, (ii) corner detection and counter classification, and (iii) counter recognition. Given an input image, the counter region is located using a modified version of the Fast-YOLOv4 model, called Fast-YOLOv4-SmallObj. Then, in a single forward pass of the proposed Corner Detection and Counter Classification <span class=\"nobr\">Network (CDCC-NET),<\/span> the cropped counter is classified as operational\/legible or faulty\/illegible and the position (x, y) of each of its corners is predicted. Finally, illegible counters are rejected, while legible ones are rectified and fed into our recognition network,\u00a0<span class=\"nobr\">called Fast-OCR.<\/span><\/p>\n<p class=\"indent\">The YOLO-based models (i.e.,\u00a0<i>Fast-YOLOv4-SmallObj<\/i>\u00a0and\u00a0<i>Fast-OCR<\/i>) were trained using the\u00a0<a href=\"https:\/\/github.com\/AlexeyAB\/darknet\/\" target=\"_blank\" rel=\"noopener noreferrer\">Darknet<\/a>\u00a0framework, while\u00a0<i>CDCC-NET<\/i>\u00a0was trained using\u00a0<a href=\"https:\/\/keras.io\/\" target=\"_blank\" rel=\"noopener noreferrer\">Keras<\/a>. 
The architectures and weights can be downloaded <a href=\"https:\/\/www.inf.ufpr.br\/vri\/databases\/amr-unconstrained-scenarios\/amr-unconstrained-scenarios.zip\" target=\"_blank\" rel=\"noopener\">here (.zip file)<\/a> or through the links below:<\/p>\n<ul>\n<li class=\"indent_triple\">Counter Detection (<i>Fast-YOLOv4-SmallObj<\/i>):\u00a0<a href=\"http:\/\/www.inf.ufpr.br\/vri\/databases\/amr-unconstrained-scenarios\/data\/counter-detection.cfg\" target=\"_blank\" rel=\"noopener noreferrer\" data-wplink-edit=\"true\">network descriptor (architecture)<\/a>, <a href=\"https:\/\/www.inf.ufpr.br\/vri\/databases\/amr-unconstrained-scenarios\/data\/counter-detection.weights\" target=\"_blank\" rel=\"noopener noreferrer\">weights<\/a>, <a href=\"https:\/\/www.inf.ufpr.br\/vri\/databases\/amr-unconstrained-scenarios\/data\/counter-detection.data\" target=\"_blank\" rel=\"noopener noreferrer\">data descriptor<\/a>, <a href=\"https:\/\/www.inf.ufpr.br\/vri\/databases\/amr-unconstrained-scenarios\/data\/counter-detection.names\" target=\"_blank\" rel=\"noopener noreferrer\">classes<\/a><\/li>\n<li class=\"indent_triple\">Corner Detection and Counter Classification (<i>CDCC-NET<\/i>): <a href=\"https:\/\/www.inf.ufpr.br\/vri\/databases\/amr-unconstrained-scenarios\/data\/CDCC-NET.hdf5\" target=\"_blank\" rel=\"noopener noreferrer\">Keras model (architecture + weights + optimizer state)<\/a><\/li>\n<li class=\"indent_triple\">Counter Recognition (<i>Fast-OCR<\/i>):\u00a0<a href=\"https:\/\/www.inf.ufpr.br\/vri\/databases\/amr-unconstrained-scenarios\/data\/counter-recognition.cfg\" target=\"_blank\" rel=\"noopener noreferrer\">network descriptor (architecture)<\/a>, <a href=\"https:\/\/www.inf.ufpr.br\/vri\/databases\/amr-unconstrained-scenarios\/data\/counter-recognition.weights\" target=\"_blank\" rel=\"noopener noreferrer\">weights<\/a>, <a href=\"https:\/\/www.inf.ufpr.br\/vri\/databases\/amr-unconstrained-scenarios\/data\/counter-recognition.data\" target=\"_blank\" 
rel=\"noopener noreferrer\">data descriptor<\/a>,\u00a0<a href=\"https:\/\/www.inf.ufpr.br\/vri\/databases\/amr-unconstrained-scenarios\/data\/counter-recognition.names\" target=\"_blank\" rel=\"noopener noreferrer\">classes<\/a><\/li>\n<li>Miscellaneous: <a href=\"https:\/\/www.inf.ufpr.br\/vri\/databases\/amr-unconstrained-scenarios\/data\/README.txt\" target=\"_blank\" rel=\"noopener\">README<\/a><\/li>\n<\/ul>\n<h3 id=\"dataset\" class=\"link_hover_visible\"><b>2.2. Copel-AMR Dataset<\/b><\/h3>\n<p><img decoding=\"async\" class=\"aligncenter wp-image-1641 size-full\" src=\"http:\/\/web.inf.ufpr.br\/vri\/wp-content\/uploads\/sites\/7\/2020\/07\/dataset.jpg\" alt=\"Some images extracted from the Copel-AMR Dataset\" width=\"1076\" height=\"470\" srcset=\"https:\/\/web.inf.ufpr.br\/vri\/wp-content\/uploads\/sites\/7\/2020\/07\/dataset.jpg 1076w, https:\/\/web.inf.ufpr.br\/vri\/wp-content\/uploads\/sites\/7\/2020\/07\/dataset-300x131.jpg 300w, https:\/\/web.inf.ufpr.br\/vri\/wp-content\/uploads\/sites\/7\/2020\/07\/dataset-768x335.jpg 768w, https:\/\/web.inf.ufpr.br\/vri\/wp-content\/uploads\/sites\/7\/2020\/07\/dataset-1024x447.jpg 1024w, https:\/\/web.inf.ufpr.br\/vri\/wp-content\/uploads\/sites\/7\/2020\/07\/dataset-360x157.jpg 360w\" sizes=\"(max-width: 1076px) 100vw, 1076px\" \/><\/p>\n<p class=\"indent\">The proposed dataset has six times more images and contains a larger variety in different aspects than the largest dataset found in the literature for the evaluation of end-to-end AMR methods. It also contains a well-defined evaluation protocol to assist the development of new approaches for AMR as well as the fair comparison among published works.<\/p>\n<p class=\"indent\">Full details regarding the dataset, including download instructions, can be\u00a0<span class=\"nobr\">seen\u00a0<a href=\"https:\/\/web.inf.ufpr.br\/vri\/databases\/copel-amr\/\" target=\"_blank\" rel=\"noopener noreferrer\"><b>here<\/b><\/a>.<\/span><\/p>\n<h3 id=\"annotations\"><b>2.3. 
Additional Annotations<\/b><\/h3>\n<p class=\"indent\">As the UFPR-AMR dataset, available <a href=\"http:\/\/web.inf.ufpr.br\/vri\/databases\/ufpr-amr\/\" target=\"_blank\" rel=\"noopener noreferrer\">here<\/a>, does not have any annotations related to the corners of the counters, we manually labeled their positions in its 2,000 images so that we can use images from both datasets (i.e., Copel-AMR and UFPR-AMR) to train and evaluate the CDCC-NET model. These annotations are provided along with the <span class=\"nobr\">Copel-AMR Dataset.<\/span><\/p>\n<h2 id=\"contact\" class=\"link_hover_visible\"><b>3. Related Work<\/b><\/h2>\n<p>You may also be interested in our previous research, where we introduced the\u00a0<a title=\"UFPR-AMR Dataset\" href=\"https:\/\/web.inf.ufpr.br\/vri\/databases\/ufpr-amr\/\" target=\"_blank\" rel=\"noreferrer noopener\">UFPR-AMR<\/a>\u00a0dataset:<\/p>\n<ul>\n<li>R. Laroca, V. Barroso, M. A. Diniz, G. R. Gon\u00e7alves, W. R. Schwartz, D. Menotti, \u201cConvolutional Neural Networks for Automatic Meter Reading,\u201d Journal of Electronic Imaging, vol. 28, no. 1, p. 
013023, 2019.\u00a0 [<a href=\"https:\/\/web.inf.ufpr.br\/vri\/publications\/laroca2019convolutional\/\" target=\"_blank\" rel=\"noreferrer noopener\">Webpage<\/a>]\u00a0[<a href=\"https:\/\/www.spiedigitallibrary.org\/journals\/journal-of-electronic-imaging\/volume-28\/issue-01\/013023\/Convolutional-neural-networks-for-automatic-meter-reading\/10.1117\/1.JEI.28.1.013023.full\" target=\"_blank\" rel=\"noreferrer noopener\">SPIE Digital Library<\/a>]\u00a0[<a href=\"http:\/\/web.inf.ufpr.br\/vri\/wp-content\/uploads\/sites\/7\/2019\/08\/laroca2019convolutional.pdf\" target=\"_blank\" rel=\"noreferrer noopener\">PDF<\/a>] [<a href=\"https:\/\/raysonlaroca.github.io\/bibtex\/laroca2019convolutional.txt\" target=\"_blank\" rel=\"noreferrer noopener\">BibTeX<\/a>] [<a href=\"http:\/\/www.inf.ufpr.br\/rblsantos\/misc\/copyright\/laroca2019convolutional.txt\" target=\"_blank\" rel=\"noreferrer noopener\">Copyright Notice<\/a>]<\/li>\n<\/ul>\n<p>A list of all papers on AMR published by us can be seen\u00a0<a href=\"https:\/\/scholar.google.com\/scholar?hl=pt-BR&amp;as_sdt=0%2C5&amp;as_ylo=2019&amp;q=allintitle%3A+meter+reading+author%3A%22David+Menotti%22&amp;btnG=\" target=\"_blank\" rel=\"noopener\">here<\/a>.<\/p>\n<h2 id=\"contact\" class=\"link_hover_visible\"><b>4. Contact\u00a0<\/b><\/h2>\n<p>Please contact the first author (<a href=\"mailto:rblsantos@inf.ufpr.br\" rel=\"noopener noreferrer\">Rayson Laroca<\/a>) with questions or comments.<\/p>\n\n\n<p><\/p>\n","protected":false},"excerpt":{"rendered":"<p>1. Paper Information 1.1. Authors Rayson Laroca, Alessandra B. Araujo, Luiz A. Zanlorensi, Eduardo C. de Almeida, David Menotti. 1.2. Abstract\u00a0 Existing approaches for image-based Automatic Meter Reading (AMR) have been evaluated on images captured in well-controlled scenarios. 
However, real-world <a href=\"https:\/\/web.inf.ufpr.br\/vri\/publications\/amr-unconstrained-scenarios\/\" class=\"read-more\">Read More &#8230;<\/a><\/p>\n","protected":false},"author":55,"featured_media":0,"parent":1252,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"","meta":{"footnotes":""},"class_list":["post-1680","page","type-page","status-publish","hentry"],"_links":{"self":[{"href":"https:\/\/web.inf.ufpr.br\/vri\/wp-json\/wp\/v2\/pages\/1680","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/web.inf.ufpr.br\/vri\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/web.inf.ufpr.br\/vri\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/web.inf.ufpr.br\/vri\/wp-json\/wp\/v2\/users\/55"}],"replies":[{"embeddable":true,"href":"https:\/\/web.inf.ufpr.br\/vri\/wp-json\/wp\/v2\/comments?post=1680"}],"version-history":[{"count":71,"href":"https:\/\/web.inf.ufpr.br\/vri\/wp-json\/wp\/v2\/pages\/1680\/revisions"}],"predecessor-version":[{"id":2000,"href":"https:\/\/web.inf.ufpr.br\/vri\/wp-json\/wp\/v2\/pages\/1680\/revisions\/2000"}],"up":[{"embeddable":true,"href":"https:\/\/web.inf.ufpr.br\/vri\/wp-json\/wp\/v2\/pages\/1252"}],"wp:attachment":[{"href":"https:\/\/web.inf.ufpr.br\/vri\/wp-json\/wp\/v2\/media?parent=1680"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}
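Rectifying a legible counter from the four corners predicted by CDCC-NET comes down to estimating a projective (perspective) transform that maps the corner points onto an upright rectangle. The following is a minimal NumPy-only sketch of that geometric step, not the authors' implementation: the corner coordinates and the target counter size are made-up illustrative values, and in practice the warp itself would typically be done with OpenCV's `cv2.getPerspectiveTransform` and `cv2.warpPerspective`.

```python
import numpy as np

def estimate_homography(src, dst):
    """Solve for the 8-DOF homography H (with h33 fixed to 1) that maps
    the four src points onto the four dst points, via the standard
    direct linear system (two equations per point correspondence)."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.asarray(A, dtype=float), np.asarray(b, dtype=float))
    return np.append(h, 1.0).reshape(3, 3)

def warp_point(H, pt):
    """Apply H to a 2-D point using homogeneous coordinates."""
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return (x / w, y / w)

# Hypothetical corners of a skewed counter region, ordered
# top-left, top-right, bottom-right, bottom-left (illustrative values).
corners = [(34, 52), (310, 40), (322, 118), (28, 130)]
# Target upright rectangle for the rectified counter crop.
target = [(0, 0), (288, 0), (288, 64), (0, 64)]

H = estimate_homography(corners, target)
```

Sampling every pixel of the rectified crop then amounts to applying the inverse of `H` to each target coordinate, which is exactly what a perspective-warp routine does internally.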