September 16, 2021 at 12:48 am
Hi all,
I need help to extract data from an XML field. My data is shown below. Any help will be highly appreciated.
CustID Xmldata
1 <attributes>
<attribute name="HR Patient Range Type" value="CAD (active)" />
<attribute name="Trigger Value" value="97" />
<attribute name="Lower Limit Threshold" value="45" />
<attribute name="Upper Limit Threshold" value="75" />
</attributes>
I'd like to have:
CustID Attribute Value
1 HR Patient Range Type CAD (active)
1 Lower Limit Threshold 45
1 Trigger Value 97
1 Upper Limit Threshold 75
Thanks in advance
September 16, 2021 at 9:01 am
Welcome to the wonderfully non-intuitive world of XQuery! Here is some T-SQL which does the job. If you want to understand exactly how it works ................ ([Phil starts whistling and gazing into the distance])
DROP TABLE IF EXISTS #SomeTab;

CREATE TABLE #SomeTab
(
     CustId  INT NOT NULL
    ,XMLData XML NOT NULL
);

INSERT #SomeTab
(
     CustId
    ,XMLData
)
VALUES
(
     1
    ,'<attributes>
        <attribute name="HR Patient Range Type" value="CAD (active)" />
        <attribute name="Trigger Value" value="97" />
        <attribute name="Lower Limit Threshold" value="45" />
        <attribute name="Upper Limit Threshold" value="75" />
      </attributes>'
);

SELECT  st.CustId
       ,name  = v.n1.value('(@*:name)[1]',  'varchar(100)')
       ,value = v.n1.value('(@*:value)[1]', 'varchar(100)')
FROM #SomeTab st
CROSS APPLY st.XMLData.nodes('//*:attributes/*:attribute') v(n1);
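As a side note: the `*:` wildcard prefixes match elements and attributes in any namespace, which is handy when the feed might add an xmlns declaration later. Since the sample XML declares no namespaces, a simpler sketch without the wildcards should also work:

```
-- Minimal variant, assuming the stored XML carries no namespaces (as in the sample).
SELECT  st.CustId
       ,name  = v.n1.value('@name',  'varchar(100)')
       ,value = v.n1.value('@value', 'varchar(100)')
FROM #SomeTab st
CROSS APPLY st.XMLData.nodes('/attributes/attribute') v(n1);
```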
September 16, 2021 at 5:10 pm
Really great! It works like magic. Thanks a million.
September 16, 2021 at 10:24 pm
Just as a sidebar in personal frustration, XML never ceases to amaze me in its inefficiencies. The original XML in the original post occupies 244 characters. Even if you convert it to something like the following where an "=" separates the tag from the value and tag/value pairs are separated by TABs and "records" are separated by "Line Feed" characters, you get 100 characters, which is only ~41% of the size of the original.
Even better: if you need to transmit thousands of such rows, use a small manifest file that identifies the field positions, the tags for the fields, and the data types (think CREATE TABLE, which would remove all questions and make it super simple), and have the data file contain only delimiter-separated fields and rows. You'd be down to just 21 bytes per row, which is only 8.6% of what the original tag-bloated "XML row" contained.
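The size comparison above can be sketched in T-SQL. This is a rough illustration only; the exact byte counts depend on how much whitespace the stored XML keeps:

```
-- Hedged illustration of the size comparison; counts vary with whitespace.
DECLARE @xml varchar(500) =
'<attributes><attribute name="HR Patient Range Type" value="CAD (active)" /><attribute name="Trigger Value" value="97" /><attribute name="Lower Limit Threshold" value="45" /><attribute name="Upper Limit Threshold" value="75" /></attributes>';

-- Same data as tag=value pairs, TAB-separated, record ended by a line feed.
DECLARE @delim varchar(200) =
'HR Patient Range Type=CAD (active)' + CHAR(9) +
'Trigger Value=97'                   + CHAR(9) +
'Lower Limit Threshold=45'           + CHAR(9) +
'Upper Limit Threshold=75'           + CHAR(10);

SELECT  XmlBytes   = DATALENGTH(@xml)
       ,DelimBytes = DATALENGTH(@delim)
       ,PctOfXml   = DATALENGTH(@delim) * 100 / DATALENGTH(@xml);
```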
So, to coin a phrase, just because you can use XML (or JSON or any other tagged data) doesn't mean you should. 😀 But people keep doing it because XML and JSON are "easy to use", RIGHT? I think this post demonstrates that that's really not true.
--Jeff Moden
Change is inevitable... Change for the better is not.
September 17, 2021 at 10:05 am
I think that XML does have legitimate uses, but large-scale data storage and retrieval is not one of them. I presume a lot of people spent a lot of time working out the implementation of XQuery, but it still seems opaque and abstruse when compared with most other T-SQL.
It seems good for individual hierarchical objects though, where open formats and flexibility are more important than throughput speed.