Computers implement floating point numbers at various precisions in order to speed and simplify computations based on measurement. In return, algorithm designers must pay careful attention to the loss of significance induced by, say, the order of operations. Decimal floating point attempts to lighten that burden without actually removing it. wikipedia
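To make the loss-of-significance point concrete, here is a minimal sketch in Python (my example, not from this page) using the standard decimal module: the same three terms summed in two orders give two different answers in binary floating point, and decimal floating point narrows the problem without removing it.

 # Binary floating point: same three terms, two orders, two answers.
 print(1e16 + 1.0 - 1e16)   # 0.0 -- the 1.0 is absorbed by the large term
 print(1e16 - 1e16 + 1.0)   # 1.0 -- cancel first and the 1.0 survives

 # Decimal floating point removes decimal representation error...
 from decimal import Decimal
 print(Decimal('0.1') + Decimal('0.2') == Decimal('0.3'))   # True

 # ...but significance is still lost once the context precision
 # (28 digits by default) is exceeded, so order still matters.
 big = Decimal(10) ** 30
 print(big + 1 - big)   # 0E+3 -- the 1 was absorbed again
 print(big - big + 1)   # 1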
When used to represent numbers, decimal floating point promises to be less wrong, but still wrong.
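For instance (again a sketch of mine, in Python): base-ten floating point represents 0.1 exactly where base two cannot, but one third defeats both.

 from decimal import Decimal
 print(Decimal(1) / Decimal(10))  # 0.1 -- exact, where binary 0.1 is not
 print(Decimal(1) / Decimal(3))   # 0.3333333333333333333333333333 -- still rounded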
See OnNumbers
Decimal floating point will be useful in the brief period between when computer manufacturers choose to implement the new IEEE standard and when their computers become sufficiently large and fast not to need it.
By "brief" I mean non-existent.
Computers were sufficiently large and fast to do unlimited-precision arithmetic in 1970. The LargeInteger and Rational abstractions were standard Smalltalk in 1980. The decimal floating point standard, IEEE 854, appeared in 1987.
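The equivalent abstractions in Python terms (an analogy of mine, not the Smalltalk originals) show what unlimited precision buys: with arbitrary-precision integers and exact rationals there is no precision to exceed and no rounding rule to choose, at some cost in speed and space.

 from fractions import Fraction

 # Exact rational arithmetic: no rounding anywhere.
 third = Fraction(1, 3)
 print(third + third + third == 1)   # True, exactly

 # Arbitrary-precision integers: nothing is ever absorbed.
 big = 10 ** 30
 print(big + 1 - big)                # 1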