This is likely down to the current Proactive Rebalance mechanism (with default settings): it has a relatively low maximum data-transfer target rate (so as not to potentially cause storage contention), and it can be lazy with regard to how far past the variance threshold it rebalances, e.g. it may move a disk just under the threshold, and then the next day the thin components on the other disks grow, pushing it back past the 30% variance health-check trigger. Thankfully the mechanisms and UX of this look to be improved in the future, and it should become simpler to tune than it is now.
Then again, other things could be causing it to flip between yellow and green over time: data migrations or deletions, changes of Storage Policies (especially in smaller clusters with Objects that are relatively large in proportion to their disk/Disk-Group size), relatively fast growth in some but not all vmdk or snapshot Objects, administrators unaware of vSAN putting hosts in MM for longer than an hour (with default settings), and so forth.
Check how much data it is moving and where, and consider increasing the rate and/or lowering the threshold if you want it to move more data (in less time) toward a better balance. Do understand, though, that moving 50% of the data around the cluster just to get the disks as near balanced as possible is probably overkill, and obviously bear in mind other storage traffic; the vSAN vSphere Performance graphs are your friend here.
Follow the link I posted in the previous comment; it is basically the man page for the vSAN RVC commands and shows all the configurable variables for proactive rebalance.
(bonus tip: the default rate is 51200 MB, and the -v switch takes a decimal, e.g. -v .20 for a 20% maximum variance target)
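To make that concrete, a tuned run might look roughly like this in RVC. This is just a sketch: the cluster path is a placeholder for your own environment, and the exact switch names can vary by build, so check them against the reference I linked before running anything:

```
> vsan.proactive_rebalance --start -v 0.20 -r 102400 /localhost/DC/computers/Cluster
> vsan.proactive_rebalance_info /localhost/DC/computers/Cluster
```

The first line starts a rebalance with a 20% variance target and double the default transfer rate; the second shows you what it is actually doing, so you can sanity-check the movement against your other storage traffic.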
Bob