
AI / Искусственный Интеллект

2019 July 23

Defragmented Panda in AI / Искусственный Интеллект
Bogdan Kirillov
You can also add things like differential plasticity to improve your learning
12 days, not 12 months :P

Bogdan Kirillov in AI / Искусственный Интеллект
Luke Skywalker
Is this part of NN?
Up

Luke Skywalker in AI / Искусственный Интеллект
Okay

Bogdan Kirillov in AI / Искусственный Интеллект
Defragmented Panda
12 days, not 12 months :P
Easy stuff, Uber has made a package for it

Defragmented Panda in AI / Искусственный Интеллект
I don't think it makes sense for just 1800 samples

Way simpler to learn every possible connection (fully connected layers; resources allow it) and then delete the weak connections (he needs a simple model for the presentation)

Luke Skywalker in AI / Искусственный Интеллект
Only 80% of the 1800 will actually be the train set
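A minimal sketch of the 80/20 split mentioned above, with a random matrix standing in for the real 1800-sample dataset (the feature count and the names `X`/`y` are assumptions):

```python
import numpy as np

# Stand-in for the real data: 1800 samples, 10 hypothetical features.
rng = np.random.default_rng(0)
X = rng.normal(size=(1800, 10))
y = rng.integers(0, 2, size=1800)

idx = rng.permutation(len(X))            # shuffle before splitting
cut = int(0.8 * len(X))                  # 1440 train / 360 test
X_train, X_test = X[idx[:cut]], X[idx[cut:]]
y_train, y_test = y[idx[:cut]], y[idx[cut:]]
```

For a small and possibly imbalanced dataset, a stratified split (keeping class proportions equal in both parts) is usually the safer choice.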

Defragmented Panda in AI / Искусственный Интеллект
Heavy dropout (50%) can help avoid overfitting even on a small dataset like yours
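A minimal numpy sketch of the 50% dropout mentioned above (the function name and shapes are illustrative; frameworks like Keras or PyTorch provide this as a built-in layer):

```python
import numpy as np

def dropout(a, rate=0.5, training=True, rng=None):
    """Inverted dropout: randomly zero a fraction `rate` of activations
    during training, rescaling the survivors so the expected value of
    the output matches the input."""
    if not training:
        return a                         # dropout is disabled at inference
    rng = rng or np.random.default_rng()
    mask = rng.random(a.shape) >= rate   # keep each unit with prob 1 - rate
    return a * mask / (1.0 - rate)
```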

Luke Skywalker in AI / Искусственный Интеллект
Okay, I need to read more on this. Not good with NNs

Defragmented Panda in AI / Искусственный Интеллект
The good part about NNs is that if there is ANY correlation between the data and the answer, they will likely find it.

And the network can be simplified by removing connections until you have just a hundred non-zero weights left (prediction power will drop, but it will be easier to explain)
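The "remove connections until only ~100 non-zero weights remain" idea is magnitude pruning. A minimal sketch on a raw weight matrix (the function name is illustrative):

```python
import numpy as np

def prune_to_k(weights, k=100):
    """Zero out all but the k largest-magnitude weights."""
    flat = np.abs(weights).ravel()
    if k >= flat.size:
        return weights.copy()            # nothing to prune
    threshold = np.partition(flat, -k)[-k]   # k-th largest magnitude
    return np.where(np.abs(weights) >= threshold, weights, 0.0)
```

In practice pruning is usually done iteratively, with some retraining between cuts, since a single hard cut costs more accuracy than a gradual one.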

Luke Skywalker in AI / Искусственный Интеллект
Okay, so an NN will need no feature selection, I think?

Defragmented Panda in AI / Искусственный Интеллект
The ideal input to an NN for your task is a float number

Luke Skywalker in AI / Искусственный Интеллект
Do I need to normalise the data or scale it?

Defragmented Panda in AI / Искусственный Интеллект
'Feature selection' is a part of data analysis that is considered to be done by the NN itself

Luke Skywalker in AI / Искусственный Интеллект
Yes

Defragmented Panda in AI / Искусственный Интеллект
Luke Skywalker
Do I need to normalise the data or scale it?
Ideally, normalize it so that the minimum is 0 and the maximum is 1
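A minimal sketch of the min-to-0 / max-to-1 rescaling suggested above, applied per column (the guard against constant columns is an added assumption, to avoid division by zero):

```python
import numpy as np

def minmax_scale(x):
    """Rescale each column of x to the [0, 1] range."""
    lo = x.min(axis=0)
    hi = x.max(axis=0)
    span = np.where(hi > lo, hi - lo, 1.0)   # constant columns stay at 0
    return (x - lo) / span
```

Note that the min and max should be computed on the train set only and then reused for the test set, otherwise test information leaks into training.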

Luke Skywalker in AI / Искусственный Интеллект
Okay

Defragmented Panda in AI / Искусственный Интеллект
If you have any data that differs by more than 100x from its non-zero minimum to its maximum, then taking the log of the data also helps. Blood analysis can give such data

Luke Skywalker in AI / Искусственный Интеллект
I will go for normalise

Luke Skywalker in AI / Искусственный Интеллект
Between 0 and 1

Defragmented Panda in AI / Искусственный Интеллект
Defragmented Panda
If you have any data that differs by more than 100x from its non-zero minimum to its maximum, then taking the log of the data also helps. Blood analysis can give such data
This might be required in addition to normalization if you have data like 0.00001 and 1.0 in the same 'column'
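A sketch of the log-then-normalize combination described above, for a 'column' spanning 0.00001 to 1.0 (assumes strictly positive values, since log is undefined at zero):

```python
import numpy as np

def log_then_minmax(x):
    """Compress a wide dynamic range with log10, then rescale to [0, 1]."""
    logged = np.log10(x)                 # assumes x > 0 everywhere
    lo, hi = logged.min(), logged.max()
    return (logged - lo) / (hi - lo)

# Values spanning five orders of magnitude, like the example above.
x = np.array([1e-5, 1e-3, 1e-1, 1.0])
```

Without the log, plain min-max would map both 1e-5 and 1e-3 to nearly 0, and the network would barely see the difference between them.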