Advanced algorithms for analyzing autocorrelations in big data
Generally, standard big data analysis algorithms are based on deductive and inductive principles: a distributional model is associated with the data, and mathematical deductions are carried out on that model. To map the model's results back onto the real information set, a series of reliability coefficients must be introduced, and these are themselves deterministically uncertain. Although such analyses are considered very sophisticated, the interpretation of their results can completely change their reliability, or even their whole meaning.
At present, few structures exist that exploit large volumes of information to deduce trends and hidden correlations; however, the few available examples show the huge potential such methods could bring to fields such as finance and trading, insurance, and advanced marketing.
In the big data field, HTE is developing a new series of algorithms capable of deducing the laws of autocorrelation of the information directly, without subjecting the analysis to an abstract distributional model, but instead interpreting the information where it is found: in the data.
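The source does not describe HTE's algorithms themselves, but the underlying idea of estimating autocorrelation from the data alone, with no assumed distributional model, can be illustrated with the standard empirical (sample) autocorrelation function. This is a minimal sketch, not HTE's method; the function name and the example series are illustrative assumptions.

```python
import numpy as np

def sample_autocorrelation(x, max_lag):
    """Empirical autocorrelation up to max_lag, estimated directly
    from the observations: no distributional model is assumed."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    x = x - x.mean()                      # center the series
    denom = np.dot(x, x)                  # total variance (times n)
    # Lagged inner products normalized by the lag-0 term.
    return np.array([np.dot(x[:n - k], x[k:]) / denom
                     for k in range(max_lag + 1)])

# Illustrative data: a noisy periodic signal. The periodic structure
# shows up as peaks in the autocorrelation at multiples of the period.
rng = np.random.default_rng(0)
t = np.arange(500)
series = np.sin(2 * np.pi * t / 50) + 0.3 * rng.standard_normal(500)
acf = sample_autocorrelation(series, 100)
# acf[0] is 1 by construction; acf peaks again near lag 50 (the period).
```

Because the estimate is built purely from lagged inner products of the observed values, any hidden periodicity or dependence structure appears in the output without the analyst having chosen a model family in advance.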