{"id":41939,"date":"2024-09-19T16:00:00","date_gmt":"2024-09-19T14:00:00","guid":{"rendered":"https:\/\/ritme.com\/storm-reply-intel-oneapi\/"},"modified":"2024-09-20T11:13:55","modified_gmt":"2024-09-20T09:13:55","slug":"storm-reply-intel-oneapi","status":"publish","type":"post","link":"https:\/\/ritme.com\/en\/storm-reply-intel-oneapi\/","title":{"rendered":"Storm Reply improves LLM performance with EC2 C7i and Intel\u00ae oneAPI instances"},"content":{"rendered":"\n<p><strong>Storm Reply, a leader in cloud solutions, has chosen Amazon EC2 C7i instances, powered by 4th Gen Intel\u00ae Xeon\u00ae processors, to optimize its large language models (LLMs). By leveraging Intel\u00ae oneAPI tools, Storm Reply has achieved GPU-level performance while optimizing costs.<\/strong><\/p>\n\n\n\n<p>Storm Reply specializes in helping clients deploy generative AI and LLM solutions. To meet the needs of a large energy company, Storm Reply needed an affordable, highly available hosting solution for its LLM workloads.<\/p>\n\n\n\n<div class=\"wp-block-getwid-advanced-spacer\" style=\"height:30px\" aria-hidden=\"true\"><\/div>\n\n\n<div class=\"wp-block-image\">\n<figure class=\"aligncenter size-large is-resized\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"576\" src=\"https:\/\/ritme.com\/wp-content\/uploads\/2024\/09\/storm-Reply-Intel-1024x576.jpg\" alt=\"Storm Reply improves LLM performance with EC2 C7i and Intel\u00ae oneAPI instances\" class=\"wp-image-41753\" style=\"width:650px\" srcset=\"https:\/\/ritme.com\/wp-content\/uploads\/2024\/09\/storm-Reply-Intel-1024x576.jpg 1024w, https:\/\/ritme.com\/wp-content\/uploads\/2024\/09\/storm-Reply-Intel-500x281.jpg 500w, https:\/\/ritme.com\/wp-content\/uploads\/2024\/09\/storm-Reply-Intel-768x432.jpg 768w, https:\/\/ritme.com\/wp-content\/uploads\/2024\/09\/storm-Reply-Intel-1536x864.jpg 1536w, https:\/\/ritme.com\/wp-content\/uploads\/2024\/09\/storm-Reply-Intel.jpg 1920w\" sizes=\"auto, 
(max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n<\/div>\n\n\n<div class=\"wp-block-getwid-advanced-spacer\" style=\"height:45px\" aria-hidden=\"true\"><\/div>\n\n\n\n<p>Following an in-depth evaluation, Storm Reply selected Amazon EC2 C7i instances, powered by 4th Gen Intel\u00ae Xeon\u00ae Scalable processors. This infrastructure has proven to be ideal for LLM workloads, particularly due to the integration of Intel libraries and the Intel\u00ae GenAI framework.<\/p>\n\n\n\n<p>Thanks to the optimizations of the Intel\u00ae Extension for PyTorch and <a href=\"https:\/\/ritme.com\/en\/software\/intel-oneapi\/\"><mark style=\"padding:0px;background-color:rgba(0, 0, 0, 0)\" class=\"has-inline-color has-ritme-blue-color\">oneAPI Toolkit<\/mark><\/a>, Storm Reply was able not only to improve its models&#8217; performance but also to significantly reduce costs. Testing revealed that LLM inference with Intel Xeon Scalable processors achieved a response time of 92 seconds, compared to 485 seconds without Intel optimizations. 
Storm Reply&#8217;s results show that this CPU-based solution rivals GPU environments in price-performance.<\/p>\n\n\n\n<p>The EC2 C7i instances enable Storm Reply to offer its customers robust AI solutions while keeping resource use and costs optimized.<\/p>\n\n\n\n<div class=\"wp-block-getwid-advanced-spacer\" style=\"height:40px\" aria-hidden=\"true\"><\/div>\n\n\n\n<p><em>To read the full article,<\/em> <a href=\"https:\/\/www.intel.com\/content\/www\/us\/en\/customer-spotlight\/stories\/storm-reply-customer-story.html\" target=\"_blank\" rel=\"noreferrer noopener\"><mark style=\"padding:0px;background-color:rgba(0, 0, 0, 0)\" class=\"has-inline-color has-ritme-blue-color\">click here<\/mark><\/a>.<\/p>\n\n\n\n<div class=\"wp-block-getwid-advanced-spacer\" style=\"height:60px\" aria-hidden=\"true\"><\/div>\n","protected":false},"excerpt":{"rendered":"<p>Storm Reply, a leader in cloud solutions, has chosen Amazon EC2 C7i instances, powered by 4th Gen Intel\u00ae Xeon\u00ae processors, to optimize its large language models (LLMs).<\/p>\n","protected":false},"author":3370,"featured_media":41943,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"inline_featured_image":false,"_uag_custom_page_level_css":"","footnotes":""},"categories":[159],"tags":[],"class_list":["post-41939","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-labs-solutions-en"],"acf":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v26.9 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>Storm Reply improves LLM performance with Intel oneAPI - Ritme<\/title>\n<meta name=\"description\" content=\"Find out in this article how cloud solutions expert Storm Reply is enhancing its large language models (LLMs) with Intel oneAPI.\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, 
max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/ritme.com\/en\/storm-reply-intel-oneapi\/\" \/>\n<meta property=\"og:locale\" content=\"en_GB\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Storm Reply improves LLM performance with Intel oneAPI - Ritme\" \/>\n<meta property=\"og:description\" content=\"Find out in this article how cloud solutions expert Storm Reply is enhancing its large language models (LLMs) with Intel oneAPI.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/ritme.com\/en\/storm-reply-intel-oneapi\/\" \/>\n<meta property=\"og:site_name\" content=\"Ritme\" \/>\n<meta property=\"article:published_time\" content=\"2024-09-19T14:00:00+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2024-09-20T09:13:55+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/ritme.com\/wp-content\/uploads\/2024\/09\/EN_img_Intel-oneAPI_Q3-2024_mise-en-avant.png\" \/>\n\t<meta property=\"og:image:width\" content=\"674\" \/>\n\t<meta property=\"og:image:height\" content=\"477\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/png\" \/>\n<meta name=\"author\" content=\"Olivier Ritme\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Olivier Ritme\" \/>\n\t<meta name=\"twitter:label2\" content=\"Estimated reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"2 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\/\/ritme.com\/en\/storm-reply-intel-oneapi\/#article\",\"isPartOf\":{\"@id\":\"https:\/\/ritme.com\/en\/storm-reply-intel-oneapi\/\"},\"author\":{\"name\":\"Olivier Ritme\",\"@id\":\"https:\/\/ritme.com\/en\/#\/schema\/person\/602d787f042cd53a4689a969a54b4d96\"},\"headline\":\"Storm Reply improves LLM performance with EC2 
C7i and Intel\u00ae oneAPI instances\",\"datePublished\":\"2024-09-19T14:00:00+00:00\",\"dateModified\":\"2024-09-20T09:13:55+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\/\/ritme.com\/en\/storm-reply-intel-oneapi\/\"},\"wordCount\":233,\"publisher\":{\"@id\":\"https:\/\/ritme.com\/en\/#organization\"},\"image\":{\"@id\":\"https:\/\/ritme.com\/en\/storm-reply-intel-oneapi\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/ritme.com\/wp-content\/uploads\/2024\/09\/EN_img_Intel-oneAPI_Q3-2024_mise-en-avant.png\",\"articleSection\":[\"Solutions\"],\"inLanguage\":\"en-GB\"},{\"@type\":\"WebPage\",\"@id\":\"https:\/\/ritme.com\/en\/storm-reply-intel-oneapi\/\",\"url\":\"https:\/\/ritme.com\/en\/storm-reply-intel-oneapi\/\",\"name\":\"Storm Reply improves LLM performance with Intel oneAPI - Ritme\",\"isPartOf\":{\"@id\":\"https:\/\/ritme.com\/en\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\/\/ritme.com\/en\/storm-reply-intel-oneapi\/#primaryimage\"},\"image\":{\"@id\":\"https:\/\/ritme.com\/en\/storm-reply-intel-oneapi\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/ritme.com\/wp-content\/uploads\/2024\/09\/EN_img_Intel-oneAPI_Q3-2024_mise-en-avant.png\",\"datePublished\":\"2024-09-19T14:00:00+00:00\",\"dateModified\":\"2024-09-20T09:13:55+00:00\",\"description\":\"Find out in this article how cloud solutions expert Storm Reply is enhancing its large language models (LLMs) with Intel 
oneAPI.\",\"breadcrumb\":{\"@id\":\"https:\/\/ritme.com\/en\/storm-reply-intel-oneapi\/#breadcrumb\"},\"inLanguage\":\"en-GB\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/ritme.com\/en\/storm-reply-intel-oneapi\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"en-GB\",\"@id\":\"https:\/\/ritme.com\/en\/storm-reply-intel-oneapi\/#primaryimage\",\"url\":\"https:\/\/ritme.com\/wp-content\/uploads\/2024\/09\/EN_img_Intel-oneAPI_Q3-2024_mise-en-avant.png\",\"contentUrl\":\"https:\/\/ritme.com\/wp-content\/uploads\/2024\/09\/EN_img_Intel-oneAPI_Q3-2024_mise-en-avant.png\",\"width\":674,\"height\":477,\"caption\":\"Storm Reply improves LLM performance with EC2 C7i and Intel\u00ae oneAPI instances\"},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/ritme.com\/en\/storm-reply-intel-oneapi\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\/\/ritme.com\/en\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Solutions\",\"item\":\"https:\/\/ritme.com\/en\/category\/labs-solutions-en\/\"},{\"@type\":\"ListItem\",\"position\":3,\"name\":\"Storm Reply improves LLM performance with EC2 C7i and Intel\u00ae oneAPI instances\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/ritme.com\/en\/#website\",\"url\":\"https:\/\/ritme.com\/en\/\",\"name\":\"Ritme\",\"description\":\"The strategic partner of research 
teams\",\"publisher\":{\"@id\":\"https:\/\/ritme.com\/en\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/ritme.com\/en\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-GB\"},{\"@type\":\"Organization\",\"@id\":\"https:\/\/ritme.com\/en\/#organization\",\"name\":\"RITME\",\"url\":\"https:\/\/ritme.com\/en\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-GB\",\"@id\":\"https:\/\/ritme.com\/en\/#\/schema\/logo\/image\/\",\"url\":\"https:\/\/ritme.com\/wp-content\/uploads\/2021\/06\/Ritme_200x86.svg\",\"contentUrl\":\"https:\/\/ritme.com\/wp-content\/uploads\/2021\/06\/Ritme_200x86.svg\",\"caption\":\"RITME\"},\"image\":{\"@id\":\"https:\/\/ritme.com\/en\/#\/schema\/logo\/image\/\"}},{\"@type\":\"Person\",\"@id\":\"https:\/\/ritme.com\/en\/#\/schema\/person\/602d787f042cd53a4689a969a54b4d96\",\"name\":\"Olivier Ritme\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-GB\",\"@id\":\"https:\/\/ritme.com\/en\/#\/schema\/person\/image\/\",\"url\":\"https:\/\/secure.gravatar.com\/avatar\/5e766af435a9d437904f33762395f31273c9bfbb003cd5e2a6b1c2536fa1f594?s=96&d=mm&r=g\",\"contentUrl\":\"https:\/\/secure.gravatar.com\/avatar\/5e766af435a9d437904f33762395f31273c9bfbb003cd5e2a6b1c2536fa1f594?s=96&d=mm&r=g\",\"caption\":\"Olivier Ritme\"},\"sameAs\":[\"http:\/\/adminRitme\"],\"url\":\"https:\/\/ritme.com\/en\/author\/olivier\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. 
-->","yoast_head_json":{"title":"Storm Reply improves LLM performance with Intel oneAPI - Ritme","description":"Find out in this article how cloud solutions expert Storm Reply is enhancing its large language models (LLMs) with Intel oneAPI.","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/ritme.com\/en\/storm-reply-intel-oneapi\/","og_locale":"en_GB","og_type":"article","og_title":"Storm Reply improves LLM performance with Intel oneAPI - Ritme","og_description":"Find out in this article how cloud solutions expert Storm Reply is enhancing its large language models (LLMs) with Intel oneAPI.","og_url":"https:\/\/ritme.com\/en\/storm-reply-intel-oneapi\/","og_site_name":"Ritme","article_published_time":"2024-09-19T14:00:00+00:00","article_modified_time":"2024-09-20T09:13:55+00:00","og_image":[{"width":674,"height":477,"url":"https:\/\/ritme.com\/wp-content\/uploads\/2024\/09\/EN_img_Intel-oneAPI_Q3-2024_mise-en-avant.png","type":"image\/png"}],"author":"Olivier Ritme","twitter_card":"summary_large_image","twitter_misc":{"Written by":"Olivier Ritme","Estimated reading time":"2 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/ritme.com\/en\/storm-reply-intel-oneapi\/#article","isPartOf":{"@id":"https:\/\/ritme.com\/en\/storm-reply-intel-oneapi\/"},"author":{"name":"Olivier Ritme","@id":"https:\/\/ritme.com\/en\/#\/schema\/person\/602d787f042cd53a4689a969a54b4d96"},"headline":"Storm Reply improves LLM performance with EC2 C7i and Intel\u00ae oneAPI 
instances","datePublished":"2024-09-19T14:00:00+00:00","dateModified":"2024-09-20T09:13:55+00:00","mainEntityOfPage":{"@id":"https:\/\/ritme.com\/en\/storm-reply-intel-oneapi\/"},"wordCount":233,"publisher":{"@id":"https:\/\/ritme.com\/en\/#organization"},"image":{"@id":"https:\/\/ritme.com\/en\/storm-reply-intel-oneapi\/#primaryimage"},"thumbnailUrl":"https:\/\/ritme.com\/wp-content\/uploads\/2024\/09\/EN_img_Intel-oneAPI_Q3-2024_mise-en-avant.png","articleSection":["Solutions"],"inLanguage":"en-GB"},{"@type":"WebPage","@id":"https:\/\/ritme.com\/en\/storm-reply-intel-oneapi\/","url":"https:\/\/ritme.com\/en\/storm-reply-intel-oneapi\/","name":"Storm Reply improves LLM performance with Intel oneAPI - Ritme","isPartOf":{"@id":"https:\/\/ritme.com\/en\/#website"},"primaryImageOfPage":{"@id":"https:\/\/ritme.com\/en\/storm-reply-intel-oneapi\/#primaryimage"},"image":{"@id":"https:\/\/ritme.com\/en\/storm-reply-intel-oneapi\/#primaryimage"},"thumbnailUrl":"https:\/\/ritme.com\/wp-content\/uploads\/2024\/09\/EN_img_Intel-oneAPI_Q3-2024_mise-en-avant.png","datePublished":"2024-09-19T14:00:00+00:00","dateModified":"2024-09-20T09:13:55+00:00","description":"Find out in this article how cloud solutions expert Storm Reply is enhancing its large language models (LLMs) with Intel oneAPI.","breadcrumb":{"@id":"https:\/\/ritme.com\/en\/storm-reply-intel-oneapi\/#breadcrumb"},"inLanguage":"en-GB","potentialAction":[{"@type":"ReadAction","target":["https:\/\/ritme.com\/en\/storm-reply-intel-oneapi\/"]}]},{"@type":"ImageObject","inLanguage":"en-GB","@id":"https:\/\/ritme.com\/en\/storm-reply-intel-oneapi\/#primaryimage","url":"https:\/\/ritme.com\/wp-content\/uploads\/2024\/09\/EN_img_Intel-oneAPI_Q3-2024_mise-en-avant.png","contentUrl":"https:\/\/ritme.com\/wp-content\/uploads\/2024\/09\/EN_img_Intel-oneAPI_Q3-2024_mise-en-avant.png","width":674,"height":477,"caption":"Storm Reply improves LLM performance with EC2 C7i and Intel\u00ae oneAPI 
instances"},{"@type":"BreadcrumbList","@id":"https:\/\/ritme.com\/en\/storm-reply-intel-oneapi\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/ritme.com\/en\/"},{"@type":"ListItem","position":2,"name":"Solutions","item":"https:\/\/ritme.com\/en\/category\/labs-solutions-en\/"},{"@type":"ListItem","position":3,"name":"Storm Reply improves LLM performance with EC2 C7i and Intel\u00ae oneAPI instances"}]},{"@type":"WebSite","@id":"https:\/\/ritme.com\/en\/#website","url":"https:\/\/ritme.com\/en\/","name":"Ritme","description":"The strategic partner of research teams","publisher":{"@id":"https:\/\/ritme.com\/en\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/ritme.com\/en\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-GB"},{"@type":"Organization","@id":"https:\/\/ritme.com\/en\/#organization","name":"RITME","url":"https:\/\/ritme.com\/en\/","logo":{"@type":"ImageObject","inLanguage":"en-GB","@id":"https:\/\/ritme.com\/en\/#\/schema\/logo\/image\/","url":"https:\/\/ritme.com\/wp-content\/uploads\/2021\/06\/Ritme_200x86.svg","contentUrl":"https:\/\/ritme.com\/wp-content\/uploads\/2021\/06\/Ritme_200x86.svg","caption":"RITME"},"image":{"@id":"https:\/\/ritme.com\/en\/#\/schema\/logo\/image\/"}},{"@type":"Person","@id":"https:\/\/ritme.com\/en\/#\/schema\/person\/602d787f042cd53a4689a969a54b4d96","name":"Olivier Ritme","image":{"@type":"ImageObject","inLanguage":"en-GB","@id":"https:\/\/ritme.com\/en\/#\/schema\/person\/image\/","url":"https:\/\/secure.gravatar.com\/avatar\/5e766af435a9d437904f33762395f31273c9bfbb003cd5e2a6b1c2536fa1f594?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/5e766af435a9d437904f33762395f31273c9bfbb003cd5e2a6b1c2536fa1f594?s=96&d=mm&r=g","caption":"Olivier 
Ritme"},"sameAs":["http:\/\/adminRitme"],"url":"https:\/\/ritme.com\/en\/author\/olivier\/"}]}},"uagb_featured_image_src":{"full":["https:\/\/ritme.com\/wp-content\/uploads\/2024\/09\/EN_img_Intel-oneAPI_Q3-2024_mise-en-avant.png",674,477,false],"thumbnail":["https:\/\/ritme.com\/wp-content\/uploads\/2024\/09\/EN_img_Intel-oneAPI_Q3-2024_mise-en-avant-300x300.png",300,300,true],"medium":["https:\/\/ritme.com\/wp-content\/uploads\/2024\/09\/EN_img_Intel-oneAPI_Q3-2024_mise-en-avant-500x354.png",500,354,true],"medium_large":["https:\/\/ritme.com\/wp-content\/uploads\/2024\/09\/EN_img_Intel-oneAPI_Q3-2024_mise-en-avant.png",674,477,false],"large":["https:\/\/ritme.com\/wp-content\/uploads\/2024\/09\/EN_img_Intel-oneAPI_Q3-2024_mise-en-avant.png",674,477,false],"1536x1536":["https:\/\/ritme.com\/wp-content\/uploads\/2024\/09\/EN_img_Intel-oneAPI_Q3-2024_mise-en-avant.png",674,477,false],"2048x2048":["https:\/\/ritme.com\/wp-content\/uploads\/2024\/09\/EN_img_Intel-oneAPI_Q3-2024_mise-en-avant.png",674,477,false]},"uagb_author_info":{"display_name":"Olivier Ritme","author_link":"https:\/\/ritme.com\/en\/author\/olivier\/"},"uagb_comment_info":0,"uagb_excerpt":"Storm Reply, a leader in cloud solutions, has chosen Amazon EC2 C7i instances, powered by 4th Gen Intel\u00ae Xeon\u00ae processors, to optimize its large language models 
(LLMs).","_links":{"self":[{"href":"https:\/\/ritme.com\/en\/wp-json\/wp\/v2\/posts\/41939","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/ritme.com\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/ritme.com\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/ritme.com\/en\/wp-json\/wp\/v2\/users\/3370"}],"replies":[{"embeddable":true,"href":"https:\/\/ritme.com\/en\/wp-json\/wp\/v2\/comments?post=41939"}],"version-history":[{"count":3,"href":"https:\/\/ritme.com\/en\/wp-json\/wp\/v2\/posts\/41939\/revisions"}],"predecessor-version":[{"id":41952,"href":"https:\/\/ritme.com\/en\/wp-json\/wp\/v2\/posts\/41939\/revisions\/41952"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/ritme.com\/en\/wp-json\/wp\/v2\/media\/41943"}],"wp:attachment":[{"href":"https:\/\/ritme.com\/en\/wp-json\/wp\/v2\/media?parent=41939"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/ritme.com\/en\/wp-json\/wp\/v2\/categories?post=41939"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/ritme.com\/en\/wp-json\/wp\/v2\/tags?post=41939"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}