{"id":1274,"date":"2019-08-27T16:24:13","date_gmt":"2019-08-27T19:24:13","guid":{"rendered":"http:\/\/web.inf.ufpr.br\/vri\/?page_id=1274"},"modified":"2022-01-21T10:49:24","modified_gmt":"2022-01-21T13:49:24","slug":"layout-independent-alpr","status":"publish","type":"page","link":"https:\/\/web.inf.ufpr.br\/vri\/publications\/layout-independent-alpr\/","title":{"rendered":"An Efficient and Layout-Independent Automatic License Plate Recognition System Based on the YOLO Detector"},"content":{"rendered":"\n\n\n<figure class=\"wp-block-image size-large is-style-default\"><img fetchpriority=\"high\" decoding=\"async\" width=\"1024\" height=\"216\" src=\"https:\/\/web.inf.ufpr.br\/vri\/wp-content\/uploads\/sites\/7\/2021\/02\/pipeline3-alpr-1024x216.png\" alt=\"\" class=\"wp-image-1843\" srcset=\"https:\/\/web.inf.ufpr.br\/vri\/wp-content\/uploads\/sites\/7\/2021\/02\/pipeline3-alpr-1024x216.png 1024w, https:\/\/web.inf.ufpr.br\/vri\/wp-content\/uploads\/sites\/7\/2021\/02\/pipeline3-alpr-300x63.png 300w, https:\/\/web.inf.ufpr.br\/vri\/wp-content\/uploads\/sites\/7\/2021\/02\/pipeline3-alpr-768x162.png 768w, https:\/\/web.inf.ufpr.br\/vri\/wp-content\/uploads\/sites\/7\/2021\/02\/pipeline3-alpr-360x76.png 360w, https:\/\/web.inf.ufpr.br\/vri\/wp-content\/uploads\/sites\/7\/2021\/02\/pipeline3-alpr.png 1454w\" sizes=\"(max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"paper-information\"><strong>1. Paper Information&nbsp;<\/strong><\/h3>\n\n\n\n<h4 class=\"wp-block-heading\" id=\"authors\"><strong>1.1. Authors&nbsp;<\/strong><\/h4>\n\n\n<p><em>Rayson Laroca, Luiz A. Zanlorensi, Gabriel R. Gon\u00e7alves, Eduardo Todt, William Robson Schwartz, David Menotti.<\/em><\/p>\n\n\n<h4 class=\"wp-block-heading\" id=\"abstract\"><strong>1.2. 
Abstract&nbsp;<\/strong><\/h4>\n\n\n\n<p><em>This paper presents an efficient and layout-independent Automatic License Plate Recognition (ALPR) system based on the state-of-the-art YOLO object detector that contains a unified approach for license plate (LP) detection and layout classification to improve the recognition results using post-processing rules. The system is conceived by evaluating and optimizing different models, aiming at achieving the best speed\/accuracy trade-off at each stage. The networks are trained using images from several datasets, with the addition of various data augmentation techniques, so that they are robust under different conditions. The proposed system achieved an average end-to-end recognition rate of 96.9% across eight public datasets (from five different regions) used in the experiments, outperforming both previous works and commercial systems in the ChineseLP, OpenALPR-EU, SSIG-SegPlate and UFPR-ALPR datasets. In the other datasets, the proposed approach achieved competitive results to those attained by the baselines. Our system also achieved impressive frames per second (FPS) rates on a high-end GPU, being able to perform in real time even when there are four vehicles in the scene. An additional contribution is that we manually labeled 38,351 bounding boxes on 6,239 images from public datasets and made the annotations publicly available to the research community.<\/em><\/p>\n\n\n\n<h4 class=\"wp-block-heading\" id=\"references\"><strong>1.3. Citation&nbsp;<\/strong><\/h4>\n\n\n<p>If you use our trained models or the annotations provided by us in your research, please cite our paper:<\/p>\n<ul>\n<li>R. Laroca, L. A. Zanlorensi, G. R. Gon\u00e7alves, E. Todt, W. R. Schwartz, D. Menotti,&nbsp;<em>\u201cAn Efficient and Layout-Independent Automatic License Plate Recognition System Based on the YOLO Detector,\u201d<\/em> IET Intelligent Transport Systems, vol. 15, no. 4, pp. 483-503, 2021. 
[<strong><a href=\"http:\/\/doi.org\/10.1049\/itr2.12030\" target=\"_blank\" rel=\"noopener\">Wiley<\/a><\/strong>] [<strong><a href=\"https:\/\/web.inf.ufpr.br\/vri\/wp-content\/uploads\/sites\/7\/2021\/05\/laroca2021efficient-published.pdf\" target=\"_blank\" rel=\"noopener noreferrer\">PDF<\/a><\/strong>] [<a href=\"https:\/\/raysonlaroca.github.io\/bibtex\/laroca2021efficient.txt\" target=\"_blank\" rel=\"noreferrer noopener\" aria-label=\" (opens in a new tab)\"><strong>BibTeX<\/strong><\/a>]<\/li>\n<\/ul>\n\n\n<p>You may also be interested in the&nbsp;<span style=\"color:#eb0f0f\" class=\"tadv-color\">conference version<\/span>&nbsp;of this paper, where we introduced the&nbsp;<a rel=\"noreferrer noopener\" href=\"https:\/\/web.inf.ufpr.br\/vri\/databases\/ufpr-alpr\/\" target=\"_blank\">UFPR-ALPR<\/a>&nbsp;dataset:<\/p>\n\n\n\n<ul class=\"wp-block-list\"><li>R. Laroca,&nbsp;E. Severo,&nbsp;L. A. Zanlorensi,&nbsp;L. S. Oliveira,&nbsp;G. R. Gon\u00e7alves,&nbsp;W. R. Schwartz,&nbsp;D. Menotti,&nbsp;<em>\u201cA Robust Real-Time Automatic License Plate Recognition Based on the YOLO Detector,\u201d<\/em>&nbsp;in International Joint Conference on Neural&nbsp;Networks (IJCNN),&nbsp;July 2018, pp. 
1\u201310.&nbsp;[<a aria-label=\"Webpage (opens in a new tab)\" rel=\"noreferrer noopener\" href=\"https:\/\/web.inf.ufpr.br\/vri\/publications\/laroca2018robust\/\" target=\"_blank\">Webpage<\/a>] [<a rel=\"noreferrer noopener\" href=\"https:\/\/ieeexplore.ieee.org\/document\/8489629\" target=\"_blank\">IEEE Xplore<\/a>] [<a aria-label=\" (opens in a new tab)\" rel=\"noreferrer noopener\" href=\"http:\/\/web.inf.ufpr.br\/vri\/wp-content\/uploads\/sites\/7\/2019\/08\/laroca2018robust.pdf\" target=\"_blank\">PDF<\/a>] [<a href=\"https:\/\/raysonlaroca.github.io\/bibtex\/laroca2018robust.txt\" target=\"_blank\" rel=\"noreferrer noopener\">BibTeX<\/a>] [<a aria-label=\" (opens in a new tab)\" rel=\"noreferrer noopener\" href=\"http:\/\/web.inf.ufpr.br\/vri\/wp-content\/uploads\/sites\/7\/2019\/08\/laroca2018robust-1.pdf\" target=\"_blank\">Presentation<\/a>] <strong>[<\/strong><a rel=\"noreferrer noopener\" href=\"https:\/\/news.developer.nvidia.com\/researchers-develop-ai-system-for-license-plate-recognition\/\" target=\"_blank\">NVIDIA News Center<\/a><strong>]<\/strong> [<a rel=\"noreferrer noopener\" href=\"https:\/\/www.youtube.com\/watch?v=XALyMj_hBvU\" target=\"_blank\">Video Demonstration<\/a>]<\/li><\/ul>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"downloads\"><strong>2. Downloads&nbsp;<\/strong><\/h3>\n\n\n\n<h4 class=\"wp-block-heading\" id=\"proposed-system\"><strong>2.1. Proposed ALPR System<\/strong><\/h4>\n\n\n\n<p>The&nbsp;<a href=\"https:\/\/github.com\/pjreddie\/darknet\/\" target=\"_blank\" rel=\"noreferrer noopener\">Darknet framework<\/a>&nbsp;was employed to train and test our networks. 
However, we used <a href=\"https:\/\/github.com\/AlexeyAB\/darknet\/\" target=\"_blank\" rel=\"noreferrer noopener\">AlexeyAB\u2019s version of Darknet<\/a>, which has several improvements over the original, including improved neural network performance by merging two layers into one (convolutional and batch normalization), optimized memory allocation during network resizing, and many other code fixes.<\/p>\n\n\n\n<p>The architectures and weights can be downloaded below:<\/p>\n\n\n\n<ul class=\"wp-block-list\"><li>Vehicle Detection:\u00a0<a href=\"https:\/\/www.inf.ufpr.br\/vri\/databases\/layout-independent-alpr\/data\/vehicle-detection.cfg\" target=\"_blank\" rel=\"noreferrer noopener\">network descriptor<\/a>,\u00a0<a href=\"https:\/\/www.inf.ufpr.br\/vri\/databases\/layout-independent-alpr\/data\/vehicle-detection.data\" target=\"_blank\" rel=\"noreferrer noopener\">data descriptor<\/a>,\u00a0<a rel=\"noreferrer noopener\" href=\"http:\/\/www.inf.ufpr.br\/vri\/databases\/layout-independent-alpr\/data\/vehicle-detection.weights\" target=\"_blank\">weights<\/a>,\u00a0<a href=\"https:\/\/www.inf.ufpr.br\/vri\/databases\/layout-independent-alpr\/data\/vehicle-detection.names\" target=\"_blank\" rel=\"noreferrer noopener\" title=\"http:\/\/www.inf.ufpr.br\/vri\/databases\/layout-independent-alpr\/data\/vehicle-detection.names\">classes<\/a><\/li><li>LP Detection and Layout Classification:\u00a0<a rel=\"noreferrer noopener\" href=\"http:\/\/www.inf.ufpr.br\/vri\/databases\/layout-independent-alpr\/data\/lp-detection-layout-classification.cfg\" target=\"_blank\">network descriptor<\/a>,\u00a0<a rel=\"noreferrer noopener\" href=\"http:\/\/www.inf.ufpr.br\/vri\/databases\/layout-independent-alpr\/data\/lp-detection-layout-classification.data\" target=\"_blank\">data descriptor<\/a>,\u00a0<a href=\"https:\/\/www.inf.ufpr.br\/vri\/databases\/layout-independent-alpr\/data\/lp-detection-layout-classification.weights\" target=\"_blank\" rel=\"noreferrer noopener\">weights<\/a>,\u00a0<a 
rel=\"noreferrer noopener\" href=\"http:\/\/www.inf.ufpr.br\/vri\/databases\/layout-independent-alpr\/data\/lp-detection-layout-classification.names\" target=\"_blank\">classes<\/a><\/li><li>LP Recognition:\u00a0<a href=\"https:\/\/www.inf.ufpr.br\/vri\/databases\/layout-independent-alpr\/data\/lp-recognition.cfg\" target=\"_blank\" rel=\"noreferrer noopener\">network descriptor<\/a>,\u00a0<a href=\"https:\/\/www.inf.ufpr.br\/vri\/databases\/layout-independent-alpr\/data\/lp-recognition.data\" target=\"_blank\" rel=\"noreferrer noopener\">data descriptor<\/a>,\u00a0<a href=\"https:\/\/www.inf.ufpr.br\/vri\/databases\/layout-independent-alpr\/data\/lp-recognition.weights\" target=\"_blank\" rel=\"noreferrer noopener\">weights<\/a>,\u00a0<a href=\"https:\/\/www.inf.ufpr.br\/vri\/databases\/layout-independent-alpr\/data\/lp-recognition.names\" target=\"_blank\" rel=\"noreferrer noopener\">classes<\/a><\/li><\/ul>\n\n\n\n<h4 class=\"wp-block-heading\" id=\"annotations\"><strong>2.2. Annotations&nbsp;<\/strong><\/h4>\n\n\n\n<p>We manually annotated the position of the vehicles, LPs and characters, as well as their classes, in each image of the public datasets used in this work that have no annotations or contain labels only for part of the ALPR pipeline. Specifically, we manually labeled&nbsp;<strong>38,351&nbsp;bounding boxes on&nbsp;6,239&nbsp;images<\/strong>. 
The data available for download in this subsection consists only of annotations, as the images used to train\/evaluate our networks come from public datasets that we do not own or that are subject to license agreements.<\/p>\n\n\n\n<p>Before you can download the annotations, we kindly ask you to register by sending an e-mail with the subject&nbsp;<strong>\u201cALPR Annotations\u201d<\/strong> to the first author (<a rel=\"noreferrer noopener\" href=\"mailto:rblsantos@inf.ufpr.br\" target=\"_blank\">rblsantos@inf.ufpr.br<\/a>), so that we know who is using the provided data and can notify you of future updates. Please include your name, affiliation and department in the e-mail. Once you have registered, you will receive a link to download the annotations. In general, a download link is issued within 1-3 business days.<\/p>\n\n\n\n<p>The list of all images not used in our experiments is provided along with the annotations.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"running\"><strong>3. Running (Linux)&nbsp;<\/strong><\/h3>\n\n\n\n<p>For the following commands, we assume that you have&nbsp;<a rel=\"noreferrer noopener\" href=\"https:\/\/github.com\/AlexeyAB\/darknet\" target=\"_blank\">AlexeyAB&#8217;s version of the Darknet framework<\/a>&nbsp;correctly compiled. Note that we achieved real-time performance using an AMD Ryzen Threadripper 1920X 3.5GHz CPU, 32 GB of RAM, and an NVIDIA Titan Xp GPU.<\/p>\n\n\n\n<h4 class=\"wp-block-heading\" id=\"#settingup\"><strong>3.1. Setting up the environment<\/strong><\/h4>\n\n\n\n<p>Go to your Darknet framework folder and download the weights and configuration files (data\/network descriptors and class names) for the networks. 
You can use the following commands (or click <a href=\"http:\/\/www.inf.ufpr.br\/vri\/databases\/layout-independent-alpr\/layout-independent-alpr.zip\" target=\"_blank\" rel=\"noreferrer noopener\">here<\/a>):<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\" style=\"font-size: 11px\">wget http:\/\/www.inf.ufpr.br\/vri\/databases\/layout-independent-alpr\/data\/vehicle-detection.cfg<br>wget http:\/\/www.inf.ufpr.br\/vri\/databases\/layout-independent-alpr\/data\/vehicle-detection.data<br>wget http:\/\/www.inf.ufpr.br\/vri\/databases\/layout-independent-alpr\/data\/vehicle-detection.weights<br>wget http:\/\/www.inf.ufpr.br\/vri\/databases\/layout-independent-alpr\/data\/vehicle-detection.names<br>wget http:\/\/www.inf.ufpr.br\/vri\/databases\/layout-independent-alpr\/data\/lp-detection-layout-classification.cfg<br>wget http:\/\/www.inf.ufpr.br\/vri\/databases\/layout-independent-alpr\/data\/lp-detection-layout-classification.data<br>wget http:\/\/www.inf.ufpr.br\/vri\/databases\/layout-independent-alpr\/data\/lp-detection-layout-classification.weights<br>wget http:\/\/www.inf.ufpr.br\/vri\/databases\/layout-independent-alpr\/data\/lp-detection-layout-classification.names<br>wget http:\/\/www.inf.ufpr.br\/vri\/databases\/layout-independent-alpr\/data\/lp-recognition.cfg<br>wget http:\/\/www.inf.ufpr.br\/vri\/databases\/layout-independent-alpr\/data\/lp-recognition.data<br>wget http:\/\/www.inf.ufpr.br\/vri\/databases\/layout-independent-alpr\/data\/lp-recognition.weights<br>wget http:\/\/www.inf.ufpr.br\/vri\/databases\/layout-independent-alpr\/data\/lp-recognition.names<br>wget http:\/\/www.inf.ufpr.br\/vri\/databases\/layout-independent-alpr\/data\/sample-image.jpg<br>wget http:\/\/www.inf.ufpr.br\/vri\/databases\/layout-independent-alpr\/data\/README.txt<\/pre>\n\n\n\n<h4 class=\"wp-block-heading\" id=\"#vehicle-detection\"><strong>3.2. 
Vehicle Detection<\/strong><\/h4>\n\n\n\n<p>The first stage in our approach is vehicle detection using a model based on YOLOv2 (all modifications made by us to this model are described in the paper). For an example image (<em>sample-image.jpg<\/em>), use the following command:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\" style=\"font-size: 11px\">.\/darknet detector test vehicle-detection.data vehicle-detection.cfg vehicle-detection.weights -thresh .25 &lt;&lt;&lt; sample-image.jpg<\/pre>\n\n\n\n<p>As a result, you should see an image like the one below.<\/p>\n\n\n\n<figure class=\"wp-block-image\"><img decoding=\"async\" class=\"aligncenter wp-image-1304\" src=\"https:\/\/web.inf.ufpr.br\/vri\/wp-content\/uploads\/sites\/7\/2019\/09\/predictions-vd.jpg\" alt=\"\" width=\"500\" height=\"375\" srcset=\"https:\/\/web.inf.ufpr.br\/vri\/wp-content\/uploads\/sites\/7\/2019\/09\/predictions-vd.jpg 640w, https:\/\/web.inf.ufpr.br\/vri\/wp-content\/uploads\/sites\/7\/2019\/09\/predictions-vd-300x225.jpg 300w, https:\/\/web.inf.ufpr.br\/vri\/wp-content\/uploads\/sites\/7\/2019\/09\/predictions-vd-360x270.jpg 360w\" sizes=\"(max-width: 500px) 100vw, 500px\" \/><\/figure>\n\n\n\n<h4 class=\"wp-block-heading\" id=\"#lp-detection-layout-classification\"><strong>3.3. License Plate Detection and Layout Classification<\/strong><\/h4>\n\n\n\n<p>Once we have the vehicle patches, we crop them and feed each one into the modified Fast-YOLOv2 network. 
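Cropping the vehicle patches mostly amounts to converting each detected box into pixel coordinates and clamping it to the image. As a minimal sketch (assuming boxes in YOLO's relative center format, as used in Darknet annotation files; the function name is ours, not part of our system):

```python
def yolo_box_to_crop(cx, cy, w, h, img_w, img_h):
    """Convert a YOLO-style relative box (center x/y, width, height in [0, 1])
    into integer pixel coordinates (left, top, right, bottom), clamped so the
    crop never leaves the image."""
    left = max(0, int(round((cx - w / 2) * img_w)))
    top = max(0, int(round((cy - h / 2) * img_h)))
    right = min(img_w, int(round((cx + w / 2) * img_w)))
    bottom = min(img_h, int(round((cy + h / 2) * img_h)))
    return left, top, right, bottom
```

The returned coordinates can then be used to slice the image array, e.g. `patch = img[top:bottom, left:right]` with OpenCV/NumPy.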
Assuming that we cropped the car and motorcycle patches and named them&nbsp;<em>car.jpg<\/em>&nbsp;and&nbsp;<em>motorcycle.jpg<\/em>, respectively, use the following commands:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\" style=\"font-size: 11px\">.\/darknet detector test lp-detection-layout-classification.data lp-detection-layout-classification.cfg lp-detection-layout-classification.weights -thresh .01 &lt;&lt;&lt; motorcycle.jpg<br>.\/darknet detector test lp-detection-layout-classification.data lp-detection-layout-classification.cfg lp-detection-layout-classification.weights -thresh .01 &lt;&lt;&lt; car.jpg<\/pre>\n\n\n\n<p>Considering only the detection with the highest confidence value in each image, the results should be:<\/p>\n\n\n\n<figure class=\"wp-block-image\"><img decoding=\"async\" class=\"aligncenter wp-image-1308\" src=\"https:\/\/web.inf.ufpr.br\/vri\/wp-content\/uploads\/sites\/7\/2019\/09\/predictions-lpd.jpg\" alt=\"\" width=\"510\" height=\"251\" srcset=\"https:\/\/web.inf.ufpr.br\/vri\/wp-content\/uploads\/sites\/7\/2019\/09\/predictions-lpd.jpg 595w, https:\/\/web.inf.ufpr.br\/vri\/wp-content\/uploads\/sites\/7\/2019\/09\/predictions-lpd-300x148.jpg 300w, https:\/\/web.inf.ufpr.br\/vri\/wp-content\/uploads\/sites\/7\/2019\/09\/predictions-lpd-360x177.jpg 360w\" sizes=\"(max-width: 510px) 100vw, 510px\" \/><\/figure>\n\n\n\n<h4 class=\"wp-block-heading\" id=\"#lp-recognition\"><strong>3.4. License Plate Recognition<\/strong><\/h4>\n\n\n\n<p>Finally, on each vehicle patch, we need to crop the bounding box of the license plate found in the previous stage (enlarging it so that it has an aspect ratio (<em>w<\/em>&nbsp;\/&nbsp;<em>h<\/em>) between 2.5 and 3.0). 
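The enlargement step can be sketched as follows (a minimal sketch only: the function name, the 2.75 target ratio, and the enlarge-only policy are our assumptions for illustration, not the paper's exact procedure):

```python
def enlarge_to_aspect(left, top, right, bottom, img_w, img_h,
                      lo=2.5, hi=3.0, target=2.75):
    """Enlarge a license-plate box so its aspect ratio (w / h) falls within
    [lo, hi], clamping the result to the image borders.

    Boxes already inside the range are returned unchanged; otherwise the box
    is padded toward `target` (2.75, the midpoint of the stated range)."""
    w, h = right - left, bottom - top
    ratio = w / h
    if lo <= ratio <= hi:
        return left, top, right, bottom          # already acceptable
    if ratio < lo:                               # too narrow: widen symmetrically
        pad = (target * h - w) / 2.0
        left, right = left - pad, right + pad
    else:                                        # too wide: grow the height
        pad = (w / target - h) / 2.0
        top, bottom = top - pad, bottom + pad
    # clamp to the image (clamping may slightly change the final ratio)
    left, top = max(0, int(round(left))), max(0, int(round(top)))
    right = min(img_w, int(round(right)))
    bottom = min(img_h, int(round(bottom)))
    return left, top, right, bottom
```

Padding symmetrically keeps the plate centered in the enlarged crop, which helps the recognition network see full character strokes at the borders.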
Supposing the new image files are named&nbsp;<em>lp-car.jpg<\/em>&nbsp;and&nbsp;<em>lp-motorcycle.jpg<\/em>, we forward them into the CR-NET network:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\" style=\"font-size: 11px\">.\/darknet detector test lp-recognition.data lp-recognition.cfg lp-recognition.weights -thresh .5 &lt;&lt;&lt; lp-motorcycle.jpg<br>.\/darknet detector test lp-recognition.data lp-recognition.cfg lp-recognition.weights -thresh .5 &lt;&lt;&lt; lp-car.jpg<\/pre>\n\n\n\n<p>Afterward, you should see the images below:<\/p>\n\n\n\n<figure class=\"wp-block-image\"><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter wp-image-1312\" src=\"https:\/\/web.inf.ufpr.br\/vri\/wp-content\/uploads\/sites\/7\/2019\/09\/predictions-lpr.jpg\" alt=\"\" width=\"350\" height=\"70\" srcset=\"https:\/\/web.inf.ufpr.br\/vri\/wp-content\/uploads\/sites\/7\/2019\/09\/predictions-lpr.jpg 338w, https:\/\/web.inf.ufpr.br\/vri\/wp-content\/uploads\/sites\/7\/2019\/09\/predictions-lpr-300x60.jpg 300w\" sizes=\"(max-width: 350px) 100vw, 350px\" \/><\/figure>\n\n\n\n<p>As can be seen, in both cases, all license plate characters were correctly recognized.<\/p>\n\n\n\n<p>We also designed heuristic rules to adapt the results produced by CR-NET according to the predicted layout class (see our paper for more details). For example, based on the datasets employed in our work, we consider that Taiwanese license plates have 5 or 6 characters. Thus, no changes were performed on the predictions shown above, as they already satisfy the heuristic rules.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"publications\"><strong>4. 
Publications&nbsp;<\/strong><\/h3>\n\n\n\n<p>A list of all papers on ALPR published by us can be seen&nbsp;<strong><a href=\"https:\/\/scholar.google.com\/scholar?hl=pt-BR&amp;as_sdt=0%2C5&amp;as_ylo=2018&amp;q=allintitle%3A+plate+OR+license+OR+vehicle+author%3A%22David+Menotti%22&amp;btnG=\" target=\"_blank\" rel=\"noreferrer noopener\">here<\/a><\/strong>.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"contact\"><strong>5. Contact&nbsp;<\/strong><\/h3>\n\n\n\n<p>Please contact the first author (<a href=\"mailto:rblsantos@inf.ufpr.br\" target=\"_blank\" rel=\"noreferrer noopener\" aria-label=\"Rayson Laroca (opens in a new tab)\">Rayson Laroca<\/a>) with questions or comments.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>1. Paper Information&nbsp; 1.1. Authors&nbsp; Rayson Laroca, Luiz A. Zanlorensi, Gabriel R. Gon\u00e7alves, Eduardo Todt, William Robson Schwartz, David Menotti. 1.2. Abstract&nbsp; This paper presents an efficient and layout-independent Automatic License Plate Recognition (ALPR) system based on the state-of-the-art YOLO <a href=\"https:\/\/web.inf.ufpr.br\/vri\/publications\/layout-independent-alpr\/\" class=\"read-more\">Read More 
&#8230;<\/a><\/p>\n","protected":false},"author":55,"featured_media":0,"parent":1252,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"","meta":{"footnotes":""},"class_list":["post-1274","page","type-page","status-publish","hentry"],"_links":{"self":[{"href":"https:\/\/web.inf.ufpr.br\/vri\/wp-json\/wp\/v2\/pages\/1274","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/web.inf.ufpr.br\/vri\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/web.inf.ufpr.br\/vri\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/web.inf.ufpr.br\/vri\/wp-json\/wp\/v2\/users\/55"}],"replies":[{"embeddable":true,"href":"https:\/\/web.inf.ufpr.br\/vri\/wp-json\/wp\/v2\/comments?post=1274"}],"version-history":[{"count":57,"href":"https:\/\/web.inf.ufpr.br\/vri\/wp-json\/wp\/v2\/pages\/1274\/revisions"}],"predecessor-version":[{"id":1993,"href":"https:\/\/web.inf.ufpr.br\/vri\/wp-json\/wp\/v2\/pages\/1274\/revisions\/1993"}],"up":[{"embeddable":true,"href":"https:\/\/web.inf.ufpr.br\/vri\/wp-json\/wp\/v2\/pages\/1252"}],"wp:attachment":[{"href":"https:\/\/web.inf.ufpr.br\/vri\/wp-json\/wp\/v2\/media?parent=1274"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}